Try one of the following options:
1. Safer export via the Smart Search index:
- If you have a Smart Search index covering your documents, rebuild the index and then export its contents.
- You'll get all published, non-archived pages together with their URLs.
- This works well for large sites.
2. Use the API:
// Select all published, non-archived documents on the target site,
// ordered by their position in the content tree
var nodes = TreeHelper.SelectNodes()
    .OnSite("yoursite")
    .Published(true)
    .WhereEquals("DocumentIsArchived", false)
    .OrderBy("NodeAliasPath");
Export NodeAliasPath, NodeID, and DocumentName to CSV.
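The CSV step can be sketched as follows, assuming the nodes query above; the output path, column choice, and quote helper are illustrative, not Kentico APIs:

```csharp
// Sketch only: assumes the "nodes" query from option 2 above.
// Basic CSV escaping: wrap each value in quotes, double embedded quotes.
Func<string, string> quote = v =>
    "\"" + (v ?? string.Empty).Replace("\"", "\"\"") + "\"";

var sb = new System.Text.StringBuilder();
sb.AppendLine("NodeID,NodeAliasPath,DocumentName");
foreach (var node in nodes)
{
    sb.AppendLine($"{node.NodeID},{quote(node.NodeAliasPath)},{quote(node.DocumentName)}");
}

// Output path is illustrative; write wherever your application has access.
System.IO.File.WriteAllText(@"C:\temp\pages.csv", sb.ToString());
```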
3. Direct SQL query: if you have database access, run this against View_CMS_Tree_Joined:
SELECT
NodeID,
NodeAliasPath,
DocumentName,
DocumentCulture,
DocumentUrlPath,
NodeSiteID,
NodeLevel,
NodeParentID
FROM
View_CMS_Tree_Joined
WHERE
DocumentIsArchived = 0
AND NodeSiteID = [YourSiteID]
AND Published = 1
ORDER BY
NodeAliasPath
This bypasses any UI limits and returns every page exactly as stored in the database.