The hot search feature described here has the following functions:
1: The search bar displays the search history of the currently logged-in user, and personal history entries can be deleted.
2: When the user enters a keyword in the search bar, the keyword is recorded in Redis as a zset (sorted set), along with its search count and the current timestamp, as sketched after this list (the DFA algorithm is also used; if you are interested, you can learn about it on Baidu).
3: Whenever a keyword the user queries already exists in Redis, its count is incremented, which is how the platform's top ten most popular queries are obtained. You can write the API yourself or pre-load some keywords into Redis.
4: Finally, you need the indecent-text filtering function. This is very important, as you know.
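Before the full code, here is a minimal sketch of the zset idea from point 2: bump a keyword's score on every search or click, and read back the ten highest-scoring keywords. The "title" key matches the service code later in the article; the class and field names here are illustrative only.

import java.util.Set;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.core.ZSetOperations;

// Illustrative sketch of the zset-based hot-search idea; only the "title" key is taken from the article.
public class HotSearchSketch {

    private final StringRedisTemplate redisSearchTemplate;

    public HotSearchSketch(StringRedisTemplate redisSearchTemplate) {
        this.redisSearchTemplate = redisSearchTemplate;
    }

    // Each search (or click) bumps the keyword's score by 1 (Redis ZINCRBY).
    public void recordSearch(String searchkey) {
        ZSetOperations<String, String> zSetOps = redisSearchTemplate.opsForZSet();
        zSetOps.incrementScore("title", searchkey, 1);
    }

    // The ten highest-scoring keywords are the platform's current hot searches (Redis ZREVRANGE).
    public Set<String> topTen() {
        return redisSearchTemplate.opsForZSet().reverseRange("title", 0, 9);
    }
}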
The code implements the hot search and personal search history functions. It only requires a few methods called from the main controller layer:
1: Add a hot search word to Redis (when adding, use the indecent-text filter described below to check the word, and only store it if it is legal).
2: Each click increases the related word's popularity by 1.
3: Query the top ten related words for a given key.
4: Insert personal search records
5: Query personal search records
First, configure the Redis data source and other basics.
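The article does not show the Redis configuration itself. A minimal sketch, assuming the standard Spring Boot Redis connection properties (spring.redis.host / spring.redis.port) are set in application.yml, that exposes the redisSearchTemplate bean injected by name in the service below might look like this (package and class names are illustrative):

package com.example.config; // hypothetical package

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.StringRedisTemplate;

// Minimal Redis configuration sketch: exposes the "redisSearchTemplate" bean
// that RedisServiceImpl injects via @Resource(name = "redisSearchTemplate").
@Configuration
public class RedisSearchConfig {

    @Bean(name = "redisSearchTemplate")
    public StringRedisTemplate redisSearchTemplate(RedisConnectionFactory connectionFactory) {
        // StringRedisTemplate uses String serializers for keys and values,
        // which is what the service code below expects.
        return new StringRedisTemplate(connectionFactory);
    }
}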
Here is the core service-layer code:
package com.****.****.****.user;

import com.jianlet.service.user.RedisService;
import org.apache.commons.lang.StringUtils;
import org.springframework.data.redis.core.*;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

import javax.annotation.Resource;
import java.util.*;
import java.util.concurrent.TimeUnit;

/**
 * @author: mrwanghc
 * @date: 2020/5/13
 * @description: hot search and personal search history service
 */
@Transactional
@Service("redisService")
public class RedisServiceImpl implements RedisService {

    // Redis data source
    @Resource(name = "redisSearchTemplate")
    private StringRedisTemplate redisSearchTemplate;

    // Add one search-bar history record for the given userid
    // searchkey is the keyword the user typed
    @Override
    public int addSearchHistoryByUserId(String userid, String searchkey) {
        String shistory = RedisKeyUtils.getSearchHistoryKey(userid);
        boolean b = redisSearchTemplate.hasKey(shistory);
        if (b) {
            Object hk = redisSearchTemplate.opsForHash().get(shistory, searchkey);
            if (hk != null) {
                return 1;
            } else {
                redisSearchTemplate.opsForHash().put(shistory, searchkey, "1");
            }
        } else {
            redisSearchTemplate.opsForHash().put(shistory, searchkey, "1");
        }
        return 1;
    }

    // Delete one personal history record
    @Override
    public Long delSearchHistoryByUserId(String userid, String searchkey) {
        String shistory = RedisKeyUtils.getSearchHistoryKey(userid);
        return redisSearchTemplate.opsForHash().delete(shistory, searchkey);
    }

    // Get the personal history list
    @Override
    public List<String> getSearchHistoryByUserId(String userid) {
        String shistory = RedisKeyUtils.getSearchHistoryKey(userid);
        boolean b = redisSearchTemplate.hasKey(shistory);
        if (b) {
            // the list must be initialized here, otherwise add() would throw a NullPointerException
            List<String> stringList = new ArrayList<>();
            Cursor<Map.Entry<Object, Object>> cursor = redisSearchTemplate.opsForHash().scan(shistory, ScanOptions.NONE);
            while (cursor.hasNext()) {
                Map.Entry<Object, Object> map = cursor.next();
                String key = map.getKey().toString();
                stringList.add(key);
            }
            return stringList;
        }
        return null;
    }

    // Record one hot-search entry: store the keyword the user typed
    @Override
    public int incrementScoreByUserId(String searchkey) {
        Long now = System.currentTimeMillis();
        ZSetOperations zSetOperations = redisSearchTemplate.opsForZSet();
        ValueOperations<String, String> valueOperations = redisSearchTemplate.opsForValue();
        List<String> title = new ArrayList<>();
        title.add(searchkey);
        for (int i = 0, lengh = title.size(); i < lengh; i++) {
            String tle = title.get(i);
            try {
                if (zSetOperations.score("title", tle) <= 0) {
                    zSetOperations.add("title", tle, 0);
                    valueOperations.set(tle, String.valueOf(now));
                }
            } catch (Exception e) {
                // score() returns null when the member does not exist yet; add it here
                zSetOperations.add("title", tle, 0);
                valueOperations.set(tle, String.valueOf(now));
            }
        }
        return 1;
    }

    // Return the ten hottest words related to searchkey
    // (if searchkey is null or empty, return the ten hottest words stored in Redis)
    @Override
    public List<String> getHotList(String searchkey) {
        String key = searchkey;
        Long now = System.currentTimeMillis();
        List<String> result = new ArrayList<>();
        ZSetOperations zSetOperations = redisSearchTemplate.opsForZSet();
        ValueOperations<String, String> valueOperations = redisSearchTemplate.opsForValue();
        Set<String> value = zSetOperations.reverseRangeByScore("title", 0, Double.MAX_VALUE);
        // When key is not empty, recommend the ten hottest related words
        if (StringUtils.isNotEmpty(searchkey)) {
            for (String val : value) {
                if (StringUtils.containsIgnoreCase(val, key)) {
                    if (result.size() > 9) { // only return the top ten
                        break;
                    }
                    Long time = Long.valueOf(valueOperations.get(val));
                    if ((now - time) < 2592000000L) { // only return data from the last month
                        result.add(val);
                    } else { // reset the word's score to 0 if it has not been searched for a month
                        zSetOperations.add("title", val, 0);
                    }
                }
            }
        } else {
            for (String val : value) {
                if (result.size() > 9) { // only return the top ten
                    break;
                }
                Long time = Long.valueOf(valueOperations.get(val));
                if ((now - time) < 2592000000L) { // only return data from the last month
                    result.add(val);
                } else { // reset the word's score to 0 if it has not been searched for a month
                    zSetOperations.add("title", val, 0);
                }
            }
        }
        return result;
    }

    // Each click adds 1 to the score of the related word searchkey
    @Override
    public int incrementScore(String searchkey) {
        String key = searchkey;
        Long now = System.currentTimeMillis();
        ZSetOperations zSetOperations = redisSearchTemplate.opsForZSet();
        ValueOperations<String, String> valueOperations = redisSearchTemplate.opsForValue();
        zSetOperations.incrementScore("title", key, 1);
        valueOperations.getAndSet(key, String.valueOf(now));
        return 1;
    }
}
The core part is done; the rest is integrating the methods above into your own code, along the lines of the sketch below.
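For example, a controller wiring these methods together might look like the following sketch. The service method names follow the code above and the sensitive-word check uses the filter introduced later in the article; the request mappings, class name, and package are illustrative only (imports for RedisService and SensitiveFilter are omitted because their packages are masked in the article).

package com.example.controller; // hypothetical package

import java.io.IOException;
import java.util.List;

import javax.annotation.Resource;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Illustrative controller showing how the service methods above could be exposed.
@RestController
@RequestMapping("/search")
public class SearchController {

    @Resource(name = "redisService")
    private RedisService redisService;

    // Record the keyword as a hot-search entry and in the user's personal history, then return suggestions.
    @GetMapping("/hot")
    public List<String> hot(@RequestParam String userid, @RequestParam String searchkey) throws IOException {
        // Reject indecent input before touching Redis (see the filter below).
        SensitiveFilter filter = SensitiveFilter.getInstance();
        if (filter.CheckSensitiveWord(searchkey, 0, SensitiveFilter.minMatchType) > 0) {
            return null;
        }
        redisService.incrementScoreByUserId(searchkey);            // store the hot word
        redisService.addSearchHistoryByUserId(userid, searchkey);  // store personal history
        return redisService.getHotList(searchkey);                 // top ten related hot words
    }

    // One click on a suggested word bumps its popularity by 1.
    @GetMapping("/click")
    public int click(@RequestParam String searchkey) {
        return redisService.incrementScore(searchkey);
    }

    // Query the user's personal search history.
    @GetMapping("/history")
    public List<String> history(@RequestParam String userid) {
        return redisService.getSearchHistoryByUserId(userid);
    }
}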
The following code implements the indecent-text filtering function. In Spring Boot, write a configuration class annotated with @Configuration so it is loaded when the project starts. The code is as follows:
package com.***.***.interceptor;

import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;

import java.io.*;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sensitive-word dictionary initialization
@Configuration
@SuppressWarnings({"rawtypes", "unchecked"})
public class SensitiveWordInit {

    // Character encoding
    private String ENCODING = "UTF-8";

    // Initialize the sensitive-word dictionary
    public Map initKeyWord() throws IOException {
        // Read the sensitive-word file into a Set
        Set<String> wordSet = readSensitiveWordFile();
        // Load the sensitive words into a nested HashMap (a deterministic finite automaton, DFA)
        return addSensitiveWordToHashMap(wordSet);
    }

    // Read the sensitive-word file into a Set
    private Set<String> readSensitiveWordFile() throws IOException {
        Set<String> wordSet = null;
        // the sensitive-word file
        ClassPathResource classPathResource = new ClassPathResource("static/censorword.txt");
        InputStream inputStream = classPathResource.getInputStream();
        try {
            // Read the file input stream
            InputStreamReader read = new InputStreamReader(inputStream, ENCODING);
            wordSet = new HashSet<String>();
            // BufferedReader buffers characters before handing them to memory, which makes reading more efficient
            BufferedReader br = new BufferedReader(read);
            String txt = null;
            // Read the file line by line and put each word into the set
            while ((txt = br.readLine()) != null) {
                wordSet.add(txt);
            }
            // Close the streams
            br.close();
            read.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return wordSet;
    }

    // Store the sensitive words from the HashSet into a nested HashMap (the DFA)
    private Map addSensitiveWordToHashMap(Set<String> wordSet) {
        // Pre-size the container to reduce resizing
        Map wordMap = new HashMap(wordSet.size());
        for (String word : wordSet) {
            Map nowMap = wordMap;
            for (int i = 0; i < word.length(); i++) {
                // Current character
                char keyChar = word.charAt(i);
                // Look up the character at the current level
                Object tempMap = nowMap.get(keyChar);
                // If the key already exists, descend into it
                if (tempMap != null) {
                    nowMap = (Map) tempMap;
                }
                // Otherwise build a new map, with isEnd set to 0 because this is not the last character
                else {
                    Map<String, String> newMap = new HashMap<String, String>();
                    newMap.put("isEnd", "0");
                    nowMap.put(keyChar, newMap);
                    nowMap = newMap;
                }
                // Last character of the word
                if (i == word.length() - 1) {
                    nowMap.put("isEnd", "1");
                }
            }
        }
        return wordMap;
    }
}
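To make the resulting structure concrete, here is a small illustration; the words "ab" and "ac" are hypothetical examples, not entries from a real word file, and the demo class is assumed to live in the same package as SensitiveWordInit.

import java.util.Map;

public class DfaStructureDemo {
    public static void main(String[] args) throws Exception {
        // For a hypothetical censorword.txt containing the two lines "ab" and "ac",
        // initKeyWord() returns a nested map equivalent to:
        //
        //   { 'a' : { "isEnd" : "0",
        //             'b' : { "isEnd" : "1" },
        //             'c' : { "isEnd" : "1" } } }
        //
        // Each character maps to the possible next characters, and "isEnd" marks
        // whether a complete sensitive word ends at that node.
        Map dictionary = new SensitiveWordInit().initKeyWord();
        System.out.println(dictionary);
    }
}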
Next is the utility class code:
package com.***.***.interceptor;

import java.io.IOException;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

// Sensitive-word filter: uses the DFA algorithm to find and filter sensitive words
public class SensitiveFilter {

    // The DFA dictionary built by SensitiveWordInit
    private Map sensitiveWordMap = null;

    // Minimum-match rule
    public static int minMatchType = 1;
    // Maximum-match rule
    public static int maxMatchType = 2;

    // Singleton instance
    private static SensitiveFilter instance = null;

    // Constructor: initializes the sensitive-word dictionary
    private SensitiveFilter() throws IOException {
        sensitiveWordMap = new SensitiveWordInit().initKeyWord();
    }

    // Get the singleton
    public static SensitiveFilter getInstance() throws IOException {
        if (null == instance) {
            instance = new SensitiveFilter();
        }
        return instance;
    }

    // Collect the sensitive words contained in a piece of text
    public Set<String> getSensitiveWord(String txt, int matchType) {
        Set<String> sensitiveWordList = new HashSet<String>();
        for (int i = 0; i < txt.length(); i++) {
            // Check whether a sensitive word starts at position i
            int length = CheckSensitiveWord(txt, i, matchType);
            // If one exists, add it to the set
            if (length > 0) {
                sensitiveWordList.add(txt.substring(i, i + length));
                // Minus 1 because the for loop increments i
                i = i + length - 1;
            }
        }
        return sensitiveWordList;
    }

    // Replace sensitive words with a replacement character
    public String replaceSensitiveWord(String txt, int matchType, String replaceChar) {
        String resultTxt = txt;
        // Get all sensitive words in the text
        Set<String> set = getSensitiveWord(txt, matchType);
        Iterator<String> iterator = set.iterator();
        String word = null;
        String replaceString = null;
        while (iterator.hasNext()) {
            word = iterator.next();
            replaceString = getReplaceChars(replaceChar, word.length());
            resultTxt = resultTxt.replaceAll(word, replaceString);
        }
        return resultTxt;
    }

    /**
     * Build the replacement string (replaceChar repeated length times)
     *
     * @param replaceChar
     * @param length
     * @return
     */
    private String getReplaceChars(String replaceChar, int length) {
        String resultReplace = replaceChar;
        for (int i = 1; i < length; i++) {
            resultReplace += replaceChar;
        }
        return resultReplace;
    }

    /**
     * Check whether the text contains a sensitive word starting at beginIndex.<br>
     * If it does, return the length of the sensitive word; otherwise return 0.
     *
     * @param txt
     * @param beginIndex
     * @param matchType
     * @return
     */
    public int CheckSensitiveWord(String txt, int beginIndex, int matchType) {
        // End flag: handles sensitive words that are only one character long
        boolean flag = false;
        // Number of matched characters, 0 by default
        int matchFlag = 0;
        Map nowMap = sensitiveWordMap;
        for (int i = beginIndex; i < txt.length(); i++) {
            char word = txt.charAt(i);
            // Descend one level in the DFA for the current character
            nowMap = (Map) nowMap.get(word);
            // The character exists: check whether a word ends here
            if (nowMap != null) {
                // Matched one more character
                matchFlag++;
                // If this is the end of a word: under the minimum-match rule stop immediately,
                // under the maximum-match rule keep looking for a longer match
                if ("1".equals(nowMap.get("isEnd"))) {
                    flag = true;
                    if (SensitiveFilter.minMatchType == matchType) {
                        break;
                    }
                }
            }
            // The character does not exist: stop
            else {
                break;
            }
        }
        if (SensitiveFilter.maxMatchType == matchType) {
            if (matchFlag < 2 || !flag) { // the match must be long enough to count as a word
                matchFlag = 0;
            }
        }
        if (SensitiveFilter.minMatchType == matchType) {
            if (matchFlag < 2 && !flag) { // the match must be long enough to count as a word
                matchFlag = 0;
            }
        }
        return matchFlag;
    }
}
You can call it directly in the controller layer of your code to check the input, for example:
// Check for illegal / sensitive words
SensitiveFilter filter = SensitiveFilter.getInstance();
int n = filter.CheckSensitiveWord(searchkey, 0, 1);
if (n > 0) { // illegal words are present
    logger.info("This user entered an illegal word --> {}; no idea what they were actually trying to search for~ userid --> {}", searchkey, userid);
    return null;
}
You can also replace the sensitive text with * or other characters:
SensitiveFilter filter = SensitiveFilter.getInstance();
String text = "敏感文字"; // i.e. "sensitive text"
String x = filter.replaceSensitiveWord(text, 1, "*");
Lastly, the censorword.txt file referenced in SensitiveWordInit.java above should be placed in the static directory under your project's resources directory. This file is the collection of indecent words, and you need to keep it up to date over time. It is loaded when the project starts.
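For reference, censorword.txt is simply one word per line; the entries below are hypothetical placeholders, not real sensitive words:

badword1
badword2
badword3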