I recently took on a job that seems quite ordinary: use Ajax to read articles and load them dynamically as the browser scrolls. But here comes the problem. The article-source interface is provided as XML, and the article source and the front end are not under the same domain name, so reading the XML runs into cross-domain restrictions. The article source does provide a publishing system that can use XSLT, so the XML can be converted into HTML. Can that HTML then be converted into JSON and accessed cross-domain? Or can XSLT convert the XML directly to JSON for cross-domain access?
Ask the other party to allow cross-domain access (i.e. enable CORS on their server)
Write a service on your own backend to fetch it, then have the front end read and display the data from your own server
There is PostHTML on the Node side
I'm not sure about the browser side, but you can search for it on GitHub
Constructor, exported (could be changed to a class)
XML template string
Call the method to convert the XML to JSON and print it to the console for inspection
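The three points above can be sketched like this: a toy converter, assuming well-formed XML containing only nested elements and text (no attributes, CDATA, comments, self-closing tags, or namespaces); the article feed contents are made up:

```javascript
// Constructor, exported below (could be changed to an ES6 class).
function XmlToJson() {}

XmlToJson.prototype.parse = function (xml) {
  // Tokenize into tags (<...>) and runs of text between them.
  const tokens = xml.trim().match(/<[^>]+>|[^<]+/g) || [];
  const root = {};
  const stack = [{ node: root, text: '' }];
  for (const tok of tokens) {
    if (tok[0] !== '<') {
      stack[stack.length - 1].text += tok;      // text content
    } else if (tok[1] === '?' || tok[1] === '!') {
      continue;                                 // skip declarations/comments
    } else if (tok[1] === '/') {
      // Closing tag: leaf elements become strings, others stay objects.
      const done = stack.pop();
      const name = tok.slice(2, -1).trim();
      const parent = stack[stack.length - 1].node;
      const value = Object.keys(done.node).length ? done.node : done.text.trim();
      if (name in parent) {
        // Repeated sibling names collapse into an array.
        if (!Array.isArray(parent[name])) parent[name] = [parent[name]];
        parent[name].push(value);
      } else {
        parent[name] = value;
      }
    } else {
      stack.push({ node: {}, text: '' });       // opening tag
    }
  }
  return root;
};

module.exports = XmlToJson;

// xml template string (hypothetical article feed)
const xml = `
<articles>
  <article><title>First</title><body>Hello</body></article>
  <article><title>Second</title><body>World</body></article>
</articles>`;

// Call the method to convert the xml to json and inspect it in the console.
console.log(JSON.stringify(new XmlToJson().parse(xml), null, 2));
```

In a browser you would more likely lean on `DOMParser` instead of hand-tokenizing; the sketch avoids it only so it also runs under plain Node.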
Browser Ajax cross-domain restrictions have nothing to do with whether the data is in XML or JSON format.
What you need is an Ajax cross-domain solution.
I'm not sure whether I've understood correctly, but isn't the asker's requirement basically a crawler? Crawl the data back on the server side, parse it, and then hand it to the front end? Maybe I'm misunderstanding, but this could also be achieved with a crawler
You can use an iframe to embed the other site's page in your own page (though note that your scripts still cannot read the embedded page's DOM across origins)