For example, my first access request returned the JSON {"n": 1}
My 100th access request returned the JSON {"n": 100}
The traditional approach of writing to the database and then reading it back to build the response does not seem guaranteed to give the right result under heavy concurrency. What should I do? This should be the problem reduced to its simplest form.
The simplest way is to create a MySQL table with an auto-incrementing primary key id, insert one row per request, and read back the generated id. That id is the value you want.
You can then easily handle high-concurrency scenarios based on the id value. For example, a flash sale ("instant kill") can treat an id that is less than 300 and divisible by 6 as a successful purchase, and a lottery can treat an id divisible by 100 (a one-in-a-hundred probability) as a win, and so on.
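A minimal JDBC sketch of this approach, assuming a table named counter_queue with an AUTO_INCREMENT id column and an illustrative connection URL (none of these names come from the answer above):

<code>import java.sql.*;

public class MySqlSerial {
    // Assumed DSN and credentials; adjust to your environment.
    private static final String URL = "jdbc:mysql://localhost:3306/app";

    // Insert one row per request; MySQL hands out the auto-increment id atomically,
    // so concurrent requests always receive distinct, increasing values.
    static long nextId(Connection conn) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO counter_queue (created_at) VALUES (NOW())",
                Statement.RETURN_GENERATED_KEYS)) {
            ps.executeUpdate();
            try (ResultSet keys = ps.getGeneratedKeys()) {
                keys.next();
                return keys.getLong(1); // this is the per-request serial number n
            }
        }
    }

    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(URL, "user", "password")) {
            long n = nextId(conn);
            System.out.println("{\"n\": " + n + "}"); // e.g. n < 300 && n % 6 == 0 -> flash-sale hit
        }
    }
}</code>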
If you were to implement it yourself, it would be nothing more than a single-threaded infinite loop that processes socket requests and maintains a global counter variable, which is neither as convenient nor as reliable as using an off-the-shelf MySQL.
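As a rough illustration of that do-it-yourself idea (the port and response format are my own assumptions), a single-threaded socket loop with one counter variable might look like this:

<code>import java.io.*;
import java.net.*;

public class SingleThreadCounterServer {
    public static void main(String[] args) throws IOException {
        long n = 0; // the "global variable"; only this one thread ever touches it
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) { // single-threaded infinite loop handling one request at a time
                try (Socket socket = server.accept();
                     OutputStream out = socket.getOutputStream()) {
                    n++; // no locking needed, since requests are processed serially
                    String body = "{\"n\": " + n + "}";
                    String response = "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\n"
                            + "Content-Length: " + body.length() + "\r\n\r\n" + body;
                    out.write(response.getBytes("UTF-8"));
                }
            }
        }
    }
}</code>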
If it is Java, a single global AtomicLong can meet your needs: getAndIncrement is an atomic operation, and the value it wraps is volatile, so updates are visible to every thread. Other languages have similar facilities.
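A minimal sketch of that AtomicLong approach (class and method names are illustrative):

<code>import java.util.concurrent.atomic.AtomicLong;

public final class RequestCounter {
    // One shared counter per JVM; the value inside AtomicLong is volatile,
    // so every thread always sees the latest count.
    private static final AtomicLong COUNTER = new AtomicLong(0);

    // incrementAndGet (the post-increment twin of getAndIncrement) is a lock-free
    // atomic read-modify-write, so concurrent requests never get the same number.
    public static long next() {
        return COUNTER.incrementAndGet(); // first call returns 1, matching {"n": 1}
    }
}</code>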
Use Redis's INCR: commands are executed by a single thread, so the counter is guaranteed to increase by exactly 1 each time, and since Redis is an in-memory database it is extremely fast.
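A minimal sketch using the Jedis client (the key name and connection details are assumptions for illustration):

<code>import redis.clients.jedis.Jedis;

public class RedisCounter {
    public static void main(String[] args) {
        // Assumes a local Redis instance; "request:n" is an arbitrary key name.
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // INCR is atomic: Redis runs commands on a single thread, so every
            // request gets a distinct, monotonically increasing value.
            long n = jedis.incr("request:n");
            System.out.println("{\"n\": " + n + "}");
        }
    }
}</code>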
Read operation: use cache
Write operation: asynchronous writing using queue
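A rough sketch of that read-from-cache, write-through-a-queue pattern; the in-memory map standing in for the cache and the elided persistence step are assumptions for illustration:

<code>import java.util.Map;
import java.util.concurrent.*;

public class ReadCacheWriteQueue {
    // Stand-ins for a real cache (e.g. Redis) and a queue in front of the database.
    private final Map<String, Long> cache = new ConcurrentHashMap<>();
    private final BlockingQueue<Long> writeQueue = new LinkedBlockingQueue<>();

    public ReadCacheWriteQueue() {
        // A single background thread drains the queue and persists asynchronously,
        // so request threads never block on the database.
        Thread writer = new Thread(() -> {
            try {
                while (true) {
                    Long value = writeQueue.take();
                    // persist `value` to the real database here
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        writer.setDaemon(true);
        writer.start();
    }

    // Read operation: served from the cache only.
    public long read(String key) {
        return cache.getOrDefault(key, 0L);
    }

    // Write operation: update the cache immediately, persist later via the queue.
    public void write(String key, long value) {
        cache.put(key, value);
        writeQueue.offer(value);
    }
}</code>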
In pure Java, the counter object can be made a singleton, and every request to the counter can be intercepted by a filter and incremented by 1 (synchronization is required). I am not sure what you mean by the database, though: is the n in {"n": 100} supposed to come from the database?
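A rough sketch of that filter-based counter (the class name and request attribute are my own illustrative choices; an AtomicLong stands in for the required synchronization):

<code>import javax.servlet.*;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;

// A single static counter shared by every request passing through the filter.
public class CounterFilter implements Filter {
    private static final AtomicLong COUNTER = new AtomicLong(0);

    @Override
    public void init(FilterConfig filterConfig) {
        // nothing to initialize
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // AtomicLong makes the increment thread-safe, so no explicit lock is needed.
        long n = COUNTER.incrementAndGet();
        request.setAttribute("n", n); // downstream code can render {"n": n}
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() {
        // nothing to clean up
    }
}</code>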
In fact, what you want is a persistent in-memory queue in which requests are queued and processed in order.
On a single machine, you can try reading and writing an SQLite database placed on /dev/shm, the Linux in-memory file system (tmpfs).
File access does not go through the network, and you do not have to implement memory residency, locking, auto-increment, or unique constraints yourself.
<code><?php
header('Content-Type: text/plain; charset=utf-8');

// sudo mkdir -m 777 /dev/shm/app
$file = '/dev/shm/app/data.db3';
$ddl = "
BEGIN;
CREATE TABLE IF NOT EXISTS queue (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    user_id INTEGER
);
CREATE UNIQUE INDEX IF NOT EXISTS queue_user_id_idx ON queue(user_id);
COMMIT;
";
if (!file_exists($file)) {
    // With concurrent processes on a multi-core machine, several of them may all
    // reach this branch, which is why the DDL uses IF NOT EXISTS.
    $db = new PDO('sqlite:'.$file);
    $db->exec($ddl); // pdo_sqlite's query and prepare cannot execute multiple SQL statements at once
} else {
    $db = new PDO('sqlite:'.$file);
}
$stmt = $db->prepare('INSERT INTO queue(user_id) VALUES(?)');
$stmt->execute(array(time())); // replace time() with your user ID
echo $stmt->rowCount()."\n";   // rows affected by the insert; 0 if the insert failed
echo $db->lastInsertId();      // auto-increment id of the inserted row; 0 if the insert failed
// php -S 127.0.0.1:8080 -t /home/eechen/www >/dev/null 2>&1 &
// ab -c100 -n1000 http://127.0.0.1:8080/</code>
The simplest way is to use Redis's zset for the auto-increment; it is efficient and simple. If you are on a single machine, you can also consider an AtomicLong (its value is lost after a shutdown and restart).