The data I am receiving contains multiple JSON objects that have the same keys within them. EDIT: I've added a sample log. This is a single event, and I need to count each DELETE_RETIRED_DEVICE, so 3 in this case. There are no commas between the JSON objects; they are separate objects.

{"connectedCloudName":"","logType":"userAction","version":1,"loggedAt":1580947200024,"actionAt":1580947200024,"device":{"uuid":"","phoneNumber":"","platform":"Android 8.0"},"actor":{"miUserId":9062,"principal":"","email":"-"},"configuration":null,"updatedBlob":null,"certificateDetails":null,"message":null,"spaceName":"Global","spacePath":"/1/","actionType":"DELETE_RETIRED_DEVICE","requestedAt":1580947200024,"completedAt":1580947200024,"reason":"Deleted the retired device successfully","status":"Success","objectId":null,"objectType":null,"objectName":null,"subjectId":"","subjectType":"Smartphone","subjectName":" (Android 8.0 - 12406901520)","subjectOwnerName":null,"requesterName":"misystem","updateRequestId":null,"userInRole":null,"parentId":null,"cookie":null}
{"connectedCloudName":"","logType":"userAction","version":1,"loggedAt":1580947200292,"actionAt":1580947200292,"device":null,"actor":null,"configuration":null,"updatedBlob":null,"certificateDetails":null,"message":null,"spaceName":null,"spacePath":null,"actionType":"SYSTEM_CONFIG_CHANGE","requestedAt":1580947200292,"completedAt":1580947200292,"reason":"Modify Preference lastDeleteRetiredDevicesStatus from Successful, 2020-02-05 00:00:00 UTC to Successful, 2020-02-06 00:00:00 UTC","status":"Success","objectId":null,"objectType":null,"objectName":null,"subjectId":null,"subjectType":"Settings Preferences","subjectName":"System","subjectOwnerName":null,"requesterName":"misystem","updateRequestId":null,"userInRole":null,"parentId":null,"cookie":null}
{"connectedCloudName":"","logType":"userAction","version":1,"loggedAt":1580947200292,"actionAt":1580947200292,"device":null,"actor":null,"configuration":null,"updatedBlob":null,"certificateDetails":null,"message":null,"spaceName":null,"spacePath":null,"actionType":"DELETE_RETIRED_DEVICE","requestedAt":1580947200292,"completedAt":1580947200292,"reason":"Initiated retired device count = 2, deleted retired device count = 2","status":"Success","objectId":null,"objectType":null,"objectName":null,"subjectId":null,"subjectType":null,"subjectName":"misystem (Source - DailyJob, Bulk deletion - 2)","subjectOwnerName":null,"requesterName":"misystem","updateRequestId":null,"userInRole":null,"parentId":null,"cookie":null}
{"connectedCloudName":"","logType":"userAction","version":1,"loggedAt":1580947200011,"actionAt":1580947200011,"device":null,"actor":null,"configuration":null,"updatedBlob":null,"certificateDetails":null,"message":null,"spaceName":null,"spacePath":null,"actionType":"DELETE_RETIRED_DEVICE","requestedAt":1580947200011,"completedAt":1580947200011,"reason":"Initiating bulk deletion of 2 retired device(s)","status":"Initiated","objectId":null,"objectType":null,"objectName":null,"subjectId":null,"subjectType":null,"subjectName":"misystem (Source - DailyJob, Bulk deletion - 2)","subjectOwnerName":null,"requesterName":"misystem","updateRequestId":null,"userInRole":null,"parentId":null,"cookie":null}

Below are the abbreviated objects: {actionType ... other keys/values} {actionType ... other keys/values} {actionType ... other keys/values}
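Outside of SPL, the counting logic being asked for can be sketched in Python. This is a hedged illustration, not the Splunk-side answer: because the objects are concatenated with no commas, `json.loads` on the whole event would fail, so `JSONDecoder.raw_decode` walks through them one at a time. The function and sample names are my own.

```python
import json

def count_action(raw_event, action="DELETE_RETIRED_DEVICE"):
    """Count JSON objects in one event whose actionType matches.

    The objects sit back to back with no commas between them, so we
    decode one object at a time with raw_decode and advance the index."""
    decoder = json.JSONDecoder()
    text = raw_event.strip()
    count, idx = 0, 0
    while idx < len(text):
        obj, end = decoder.raw_decode(text, idx)
        if obj.get("actionType") == action:
            count += 1
        idx = end
        # skip any whitespace between objects
        while idx < len(text) and text[idx].isspace():
            idx += 1
    return count

# abbreviated stand-ins for the full objects above
sample = ('{"actionType":"DELETE_RETIRED_DEVICE","status":"Success"} '
          '{"actionType":"SYSTEM_CONFIG_CHANGE","status":"Success"} '
          '{"actionType":"DELETE_RETIRED_DEVICE","status":"Initiated"}')
print(count_action(sample))  # 2
```

Inside Splunk, the analogous approach would be extracting every actionType occurrence from `_raw` (for example with `rex max_match=0`) and counting the multivalue result.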
I am trying to configure a search head cluster and am getting the error socket_error = connect timeout while electing the captain. I have configured the deployer and the 3 search heads successfully and have initiated the cluster on all three instances. I also restarted all three search head servers before electing the captain, and tested connectivity on all ports, but I still get an error when running the captain bootstrap command: ./splunk bootstrap shcluster-captain -servers_list ":,:,..." -auth : I have followed all the steps very carefully but still cannot elect one search head as captain. Notes: 1. I am not running the above command on the deployer; I am running it on a search head member. 2. The URIs and the management port (8089) in the above command are those of the search head members, given with https and comma-separated. Please help me resolve this issue; it's urgent.
I would like to show each percentage of my values separately in a dashboard (because of the SVG input). How can I do that? For example, I need this: SuspendedEV 12.897025
I know the REST API command to change the ownership of a saved search; it is below. I want to know the corresponding SPL so that I can issue this change from the Splunk web interface. /servicesNS/user01/search/saved/searches/$_SAVED_SEARCH_NAME_$/acl -d owner=user02 -d sharing=$_SHARING_VALUE_$ Can anyone please help me here?
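As far as I know there is no direct SPL equivalent for this, since the `| rest` search command only issues GET requests, while the ACL change is a POST. For reference, here is a hedged Python sketch of that same POST using only the standard library; the host, session key, and object names are placeholders, not values from the question.

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def acl_endpoint(owner_ns, app, saved_search):
    """Path of the saved search ACL endpoint, namespaced by the
    object's *current* owner (user01 in the question)."""
    return f"/servicesNS/{owner_ns}/{app}/saved/searches/{saved_search}/acl"

def change_owner(base_url, session_key, saved_search, new_owner, sharing):
    # base_url is something like "https://localhost:8089";
    # session_key comes from /services/auth/login.
    # Note: a self-signed management cert may require an ssl context.
    url = base_url + acl_endpoint("user01", "search", saved_search)
    body = urlencode({"owner": new_owner, "sharing": sharing}).encode()
    req = Request(url, data=body,
                  headers={"Authorization": f"Splunk {session_key}"})
    return urlopen(req)  # urlopen sends a POST because data is set
```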
Hello, I have a WEC server which already collects logs from domain servers (http) and workgroup servers (https). As I am testing Splunk as a SIEM, I have installed a forwarder on that host which forwards the "Forwarded Events" log. To map the ComputerName to the host field in Splunk, I added the following in inputs.conf at the forwarder level: host = WinEventLogForwardHost So far so good, except for the workgroup host logs, where the ComputerName is obviously a plain hostname instead of an FQDN. In this case, Splunk puts almost the full log into host, up to the first ".". So I guess there is a process somewhere which parses the FQDN (up to the first ".") to keep only the server name in host. Moreover, this creates a lot of hosts in the data summary. Is there a way to make Splunk understand the Windows logs correctly? This should work out of the box. This one is correct. This one is not correct and takes the log up to the first "." as host. Thanks
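A toy model of the truncation behaviour being described, purely to illustrate the symptom (the input strings are made up):

```python
def host_from(text):
    """Take everything up to the first '.', which is the behaviour
    described above: fine for an FQDN, but when the hostname has no
    dot the scan runs on into the rest of the event."""
    return text.split(".", 1)[0]

print(host_from("server01.corp.example.com"))        # server01 -- correct
print(host_from("WORKGROUPHOST some log text 1.2"))  # swallows the log up to the first dot
```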
I am looking to extract fields from some Windows security events. Much of the data I need ends up in the "Message" section of the log due to the way Windows logs are formatted. See the example below; ideally, each of the fields highlighted in yellow would be its own field. Any ideas? Thank you!
Hi, we have 300 queues which continuously send data into Splunk every 5 minutes. Each queue has a Thresholdtime, a Riskpoint, and a Message_in_Queue value (Thresholdtime and Riskpoint are constant). Requirement: generate dynamic alerts for any Queue_Name whose Message_in_Queue value stays greater than its Riskpoint value continuously for the Threshold Time. Example data: here Queue_Name B contains a Message_in_Queue value of 20000, which is greater than its Riskpoint continuously for 5 minutes, so for B we need to raise an alert. Please, anyone, help me with this case, as it is a complex scenario.
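The alert condition above ("value stays above Riskpoint continuously for the Threshold Time") can be sketched outside Splunk as follows; in SPL the same idea is typically built with streamstats over time-ordered events. The function and the sample numbers here are illustrative, not from the question.

```python
from datetime import datetime, timedelta

def breaches(samples, riskpoint, threshold_minutes):
    """samples: (timestamp, messages_in_queue) pairs sorted by time,
    one per 5-minute poll. Return True once the value has stayed above
    riskpoint continuously for at least threshold_minutes."""
    run_start = None
    for ts, value in samples:
        if value > riskpoint:
            if run_start is None:
                run_start = ts          # start of the above-threshold run
            if ts - run_start >= timedelta(minutes=threshold_minutes):
                return True
        else:
            run_start = None            # run broken; start over
    return False
```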
I have a test environment on my laptop. I get the following error: Unknown search command 'mycommand'. Details:
- Using Splunk Enterprise 8.0.1 on macOS Mojave
- Created a new app called python_sdk_app and revised permissions to "All apps"
- Installed Splunk SDK 1.6.11 in the bin folder of the app using 'pip install -t . splunk-sdk'
- Created commands.conf inside the default directory of the app (also tried the local directory)
- Restarted Splunk

commands.conf file:
[mycommand]
chunked=true
filename=mycommand.py

package locations:
$ python -m site
sys.path = [
'/Applications/Splunk/splunk-sdk-python-1.6.11',
'/anaconda3/lib/python36.zip',
'/anaconda3/lib/python3.6',
'/anaconda3/lib/python3.6/lib-dynload',
'/anaconda3/lib/python3.6/site-packages',
'/anaconda3/lib/python3.6/site-packages/aeosa',
'/anaconda3/lib/python3.6/site-packages/splunk_sdk-1.6.11-py3.6.egg',
]

environment variables:
SHELL=/bin/bash
SPLUNK_HOME=/Applications/Splunk
PATH=/Library/Frameworks/Python.framework/Versions/3.6/bin:/anaconda3/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
PYTHONPATH=/Applications/Splunk/splunk-sdk-python-1.6.11
The average function below is not giving me the correct value for the last 30 days. Kindly advise.
| eval sTime=strptime(startTime,"%a %B %d %Y %H:%M:%S")
| eval eTime=strptime(endTime,"%a %B %d %Y %H:%M:%S")
| eval tTime=strptime(startTime,"%a %B %d %Y %H:%M:%S")
| eventstats latest(STATUS) AS STATUS BY JOB
| transaction JOB,startTime,endTime
| eval e_Time=if(STATUS="TERMINATED" OR eTime
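One thing worth checking: if the format string does not exactly match the startTime/endTime values, SPL's strptime returns null, and those events silently drop out of any average. The same `%a %B %d %Y %H:%M:%S` format can be sanity-checked in Python, whose strptime uses the same directives; the sample timestamps below are made up for illustration.

```python
from datetime import datetime

fmt = "%a %B %d %Y %H:%M:%S"   # abbreviated weekday, full month name

# a ValueError here would indicate the format does not match the data
start = datetime.strptime("Wed February 05 2020 13:30:00", fmt)
end = datetime.strptime("Wed February 05 2020 14:00:00", fmt)
duration = (end - start).total_seconds()
print(duration)  # 1800.0
```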
I'm trying to run a query and retrieve the results through the REST API, and it returns zero results. Below are the requests I use to submit the search, check the status of the dispatch, and retrieve the results:
https://splunk.corp.net:8089/servicesNS/admin/search/search/jobs
https://splunk.corp.net:8089/services/search/jobs/{{sid}}?output_mode=json
https://splunk.corp.net:8089/services/search/jobs/{{sid}}/results
The same query executed from the Splunk UI returns results. Also, the UI returns results in a fraction of a second, but over the REST API it takes more than ~5 minutes to complete the search (dispatchState = DONE) and returns zero results. It is the same for any search I do with the REST API. Any idea what I am doing wrong here?
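Two things commonly cause exactly these symptoms, sketched below as a hedged, stdlib-only Python example (the host is the one from the question; the session key and helper names are placeholders): the query string submitted over REST must start with `search ` (or a leading `|` for a generating command), and a REST job without an explicit time range can default to a much wider window than the UI uses, which would explain both the slow run and the empty result set.

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

BASE = "https://splunk.corp.net:8089"   # management host from the question

def normalize_query(q):
    """Prepend 'search ' unless the query already starts with it or
    with '|' -- omitting this prefix is a very common cause of
    failed or empty REST search jobs."""
    q = q.strip()
    return q if q.startswith(("search ", "|")) else "search " + q

def submit(session_key, query, earliest="-24h", latest="now"):
    # Passing earliest_time/latest_time explicitly keeps the REST job's
    # time range aligned with what the UI search used.
    body = urlencode({
        "search": normalize_query(query),
        "earliest_time": earliest,
        "latest_time": latest,
        "output_mode": "json",
    }).encode()
    req = Request(f"{BASE}/services/search/jobs", data=body,
                  headers={"Authorization": f"Splunk {session_key}"})
    return urlopen(req)  # the response body contains the sid to poll
```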
Hello all, this question might have been addressed already, but here it is: what is the best approach for monitoring the systems where Splunk has been deployed in a distributed environment? What I would like to monitor, for example, is who is accessing the systems via SSH, or what files are being modified. What I am thinking right now is installing a forwarder on each of the systems, pointing them to the deployment server, and deploying the configuration from it, but I was wondering whether Splunk already does this by default or there is a better way to monitor these systems. Regards,
Many questions deal with indexed volume per source and per day, for licensing concerns. My need is log volume per source and per day. To explain more: I want to know the volume of each source without taking into account catch-up of old logs that could not be indexed in the past. So not the indexed volume per day, but the volume of logs timestamped on a given day.
Hi, we are looking to generate a PDF from the results we have on a dashboard. We are using Selenium to log in and extract the PDF; however, we have to be sure the dashboard is 100% done before Selenium goes to get the results. Is there a way to do this? Thanks in advance, Robert
Hello Community, we have a scheduled weekly report that used to take 30 minutes and now takes more than 20 hours to complete. When we inspected the search job we could see a few search components taking a long time. Is there any way to identify the root cause, or what might have caused this search to run for so much longer? Any suggestions are appreciated!

The search has completed and has returned 3,664 results by scanning 438,673,166,793 events in 73,635.129 seconds (~20.45 hours).

Duration (seconds) | Component | Invocations | Input count | Output count
205,560.16 | command.search | 4,020,759 | 7,580,746,540 | 9,809,797,326
256,005.83 | command.tstats | 2,990,397 | 39,155,807 | 39,159,471
255,889.30 | dispatch.stream.remote | 1,495,082 | ----- | 12,281,759,678

Completion times:
Nov 4th - 35 mins
Nov 11th - 35 mins
Nov 18th - 30 mins
Nov 25th - 35 mins
Dec 2nd - 29 mins
Dec 9th - 32 mins
Dec 16th - 34 mins
Dec 23rd - 36 mins
Dec 30th - 35 mins
Jan 6th - 35 mins
Jan 13th - 7h 5m
Jan 30th - 12h 58m
Jan 27th - 18h 35m
Feb 3rd - 35hr 45m
I have a problem with integrating Phantom with Active Directory. When I try to test connectivity with the "Microsoft LDAP" app, I get an error with this message:
App 'LDAP' started successfully (id: 1580900681102) on asset: 'pcb.lab'(id: 10)
Loaded action execution configuration
Ldap module initialized
1 action failed
handle_action exception occurred. Error string: 'cannot concatenate 'str' and 'dict' objects'
Splunk Enterprise 7.2.0. I have this query:
index="_introspection" component="hostwide" | timechart max(data.mem.mem_used) as Current by splunk_server
In the legend I see the splunk_server values based on hostnames. I created a lookup, indexers.csv:
indexer,site
hostname1,Site-1
hostname2,Site-2
How can I use the lookup to replace the splunk_server values with the lookup field site?
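Conceptually, SPL's `lookup` command performs a left join keyed on the hostname, matching the `indexer` column against `splunk_server` and outputting `site`. A hypothetical Python model of what that join does to each result row (the table contents mirror the indexers.csv above; falling back to the hostname when it is missing from the table is my own assumption):

```python
sites = {"hostname1": "Site-1", "hostname2": "Site-2"}   # indexers.csv as a dict

def relabel(rows, table):
    """Annotate each row with its site, keeping the raw hostname for
    any splunk_server that is missing from the lookup table."""
    for row in rows:
        row["site"] = table.get(row["splunk_server"], row["splunk_server"])
    return rows

rows = relabel([{"splunk_server": "hostname1"},
                {"splunk_server": "hostname3"}], sites)
```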
Hello, we have created some dashboards in Splunk 7.0 and the feedback from users is good, but they also have some wishes I could not find a solution for so far. One of them is the possibility to expand a panel / chart visualization to full-screen mode, as there is quite a lot of data presented there. Is that possible in any way? I mean, one could jump into the search of the panel and then go to the visualization tab, but perhaps there is a better way that I am not able to see. Second, my end user uses the zoom option quite heavily in order to zoom into the time frame he is interested in. He then quite often takes a screenshot of the chart, and he complains about the "Reset Zoom" image being in the middle, so it appears in the screenshot. Is there any way to change the position of "Reset Zoom" so that it stays in the corner? Third, we have quite long terms in the chart legend, so they get cut off. Is there any way to get them displayed in full when hovering over the chart legend? Kind Regards, Kamil
Hi community, I'm not able to set up a filter for metrics indexes in a role definition. According to the manuals it should be enough to set a filter like: dimension_name=value but it doesn't work: all metrics are always returned regardless of the value I set. Thank you
Hi Splunkers, is there any procedure for how to configure the Splunk application for this? Our scenario is: the client has Kerberos as the single sign-on; do we have to integrate Kerberos with our apps? TIA
We have installed the Tenable Add-on For Splunk and configured it to connect to cloud.tenable.com with an API key. Our Splunk Enterprise is v7.3.3 and the Tenable Add-on is 3.1.0. The connection establishes correctly; however, it doesn't appear to be downloading all the required information from Tenable. Only tenable:io:plugin information is ingested into our Splunk index. I expect to also see tenable:io:vuln and tenable:io:assets, but it seems the process for these two other pieces of information just doesn't occur. The only 'customisation' we've done is that we don't use the default index; we use a dedicated tenable index, and we have updated the get_tenable_index macro to look for this new index. Increasing the logging to DEBUG and running another fetch shows the system is connecting and reports it is exporting the vuln list, yet still no information is ingested into Splunk. Log output from the last run:

2020-02-05 16:15:14,312 DEBUG pid=3980 tid=MainThread file=cli_common.py:cacheConfFile:345 | Preloading from 'C:\Program Files\Splunk\var\run\splunk\merged\server.conf'.
2020-02-05 16:15:14,315 DEBUG pid=3980 tid=MainThread file=cli_common.py:cacheConfFile:345 | Preloading from 'C:\Program Files\Splunk\var\run\splunk\merged\web.conf'.
2020-02-05 16:15:14,573 DEBUG pid=3980 tid=MainThread file=__init__.py:simpleRequest:480 | simpleRequest > GET https://127.0.0.1:8089/services/kvstore/status?output_mode=json [] sessionSource=direct timeout=30
2020-02-05 16:15:14,625 DEBUG pid=3980 tid=MainThread file=__init__.py:simpleRequest:511 | simpleRequest < server responded status=200 responseTime=0.0500s
2020-02-05 16:15:14,631 INFO pid=3980 tid=MainThread file=base_modinput.py:log_info:295 | Tenable.io data collection started for input: Tenable_Cloud_IO_Input
2020-02-05 16:15:14,631 INFO pid=3980 tid=MainThread file=base_modinput.py:log_info:295 | Tenable.io vulns:last_found data collection started
2020-02-05 16:15:14,631 DEBUG pid=3980 tid=MainThread file=base_modinput.py:log_debug:288 | Check point name is Tenable_Cloud_IO_Input_vulns_last_found
2020-02-05 16:15:14,631 INFO pid=3980 tid=MainThread file=splunk_rest_client.py:_request_handler:100 | Use HTTP connection pooling
2020-02-05 16:15:14,631 DEBUG pid=3980 tid=MainThread file=binding.py:get:678 | GET request to https://127.0.0.1:8089/servicesNS/nobody/TA-tenable/storage/collections/config/TA_tenable_checkpointer (body: {})
2020-02-05 16:15:14,632 DEBUG pid=3980 tid=MainThread file=connectionpool.py:_new_conn:959 | Starting new HTTPS connection (1): 127.0.0.1:8089
2020-02-05 16:15:14,638 DEBUG pid=3980 tid=MainThread file=connectionpool.py:_make_request:437 | https://127.0.0.1:8089 "GET /servicesNS/nobody/TA-tenable/storage/collections/config/TA_tenable_checkpointer HTTP/1.1" 200 5326
2020-02-05 16:15:14,640 DEBUG pid=3980 tid=MainThread file=binding.py:new_f:73 | Operation took 0:00:00.008000
2020-02-05 16:15:14,640 DEBUG pid=3980 tid=MainThread file=binding.py:get:678 | GET request to https://127.0.0.1:8089/servicesNS/nobody/TA-tenable/storage/collections/config/ (body: {'offset': 0, 'count': -1, 'search': 'TA_tenable_checkpointer'})
2020-02-05 16:15:14,642 DEBUG pid=3980 tid=MainThread file=connectionpool.py:_make_request:437 | https://127.0.0.1:8089 "GET /servicesNS/nobody/TA-tenable/storage/collections/config/?offset=0&count=-1&search=TA_tenable_checkpointer HTTP/1.1" 200 4514
2020-02-05 16:15:14,642 DEBUG pid=3980 tid=MainThread file=binding.py:new_f:73 | Operation took 0:00:00.003000
2020-02-05 16:15:14,644 DEBUG pid=3980 tid=MainThread file=binding.py:get:678 | GET request to https://127.0.0.1:8089/servicesNS/nobody/TA-tenable/storage/collections/data/TA_tenable_checkpointer/Tenable_Cloud_IO_Input_vulns_last_found (body: {})
2020-02-05 16:15:14,648 DEBUG pid=3980 tid=MainThread file=connectionpool.py:_make_request:437 | https://127.0.0.1:8089 "GET /servicesNS/nobody/TA-tenable/storage/collections/data/TA_tenable_checkpointer/Tenable_Cloud_IO_Input_vulns_last_found HTTP/1.1" 404 140
2020-02-05 16:15:14,648 DEBUG pid=3980 tid=MainThread file=base_modinput.py:log_debug:288 | Check point state returned is None
2020-02-05 16:15:14,648 DEBUG pid=3980 tid=MainThread file=base.py:_request:446 | {"params": {}, "method": "POST", "url": "https://cloud.tenable.com/vulns/export", "body": {"filters": {"severity": ["medium", "high", "critical"], "state": ["open", "reopened"]}, "num_assets": "500"}}
2020-02-05 16:15:14,648 DEBUG pid=3980 tid=MainThread file=connectionpool.py:_new_conn:959 | Starting new HTTPS connection (1): cloud.tenable.com:443
2020-02-05 16:15:15,078 DEBUG pid=3980 tid=MainThread file=connectionpool.py:_make_request:437 | https://cloud.tenable.com:443 "POST /vulns/export HTTP/1.1" 200 None
2020-02-05 16:15:15,078 DEBUG pid=3980 tid=MainThread file=base.py:_request:476 | Request-UUID fbf3312ac715e21af4e3daf2c34d5aa6 for https://cloud.tenable.com/vulns/export
2020-02-05 16:15:15,079 DEBUG pid=3980 tid=MainThread file=exports.py:vulns:253 | Initiated vuln export ccbfe170-558e-44cd-bac2-c5d3319b3ebd
2020-02-05 16:15:15,079 DEBUG pid=3980 tid=MainThread file=base.py:_request:446 | {"params": {}, "method": "GET", "url": "https://cloud.tenable.com/vulns/export/ccbfe170-558e-44cd-bac2-c5d3319b3ebd/status", "body": {}}
2020-02-05 16:15:15,292 DEBUG pid=3980 tid=MainThread file=connectionpool.py:_make_request:437 | https://cloud.tenable.com:443 "GET /vulns/export/ccbfe170-558e-44cd-bac2-c5d3319b3ebd/status HTTP/1.1" 200 None
2020-02-05 16:15:15,293 DEBUG pid=3980 tid=MainThread file=base.py:_request:476 | Request-UUID 8b65774a8afef55e32a0ff615776445c for https://cloud.tenable.com/vulns/export/ccbfe170-558e-44cd-bac2-c5d3319b3ebd/status
2020-02-05 16:15:15,295 INFO pid=3980 tid=MainThread file=base_modinput.py:log_info:295 | Tenable.io vulns:last_found data collection completed
2020-02-05 16:15:15,295 INFO pid=3980 tid=MainThread file=base_modinput.py:log_info:295 | Tenable.io vulns:last_fixed data collection started
2020-02-05 16:15:15,295 DEBUG pid=3980 tid=MainThread file=base_modinput.py:log_debug:288 | Check point name is Tenable_Cloud_IO_Input_vulns_last_fixed
2020-02-05 16:15:15,295 DEBUG pid=3980 tid=MainThread file=binding.py:get:678 | GET request to https://127.0.0.1:8089/servicesNS/nobody/TA-tenable/storage/collections/data/TA_tenable_checkpointer/Tenable_Cloud_IO_Input_vulns_last_fixed (body: {})
2020-02-05 16:15:15,298 DEBUG pid=3980 tid=MainThread file=connectionpool.py:_make_request:437 | https://127.0.0.1:8089 "GET /servicesNS/nobody/TA-tenable/storage/collections/data/TA_tenable_checkpointer/Tenable_Cloud_IO_Input_vulns_last_fixed HTTP/1.1" 404 140
2020-02-05 16:15:15,298 DEBUG pid=3980 tid=MainThread file=base_modinput.py:log_debug:288 | Check point state returned is None
2020-02-05 16:15:15,298 DEBUG pid=3980 tid=MainThread file=base.py:_request:446 | {"params": {}, "method": "POST", "url": "https://cloud.tenable.com/vulns/export", "body": {"filters": {"severity": ["medium", "high", "critical"], "state": ["fixed"]}, "num_assets": "500"}}
2020-02-05 16:15:15,486 DEBUG pid=3980 tid=MainThread file=connectionpool.py:_make_request:437 | https://cloud.tenable.com:443 "POST /vulns/export HTTP/1.1" 200 None
2020-02-05 16:15:15,486 DEBUG pid=3980 tid=MainThread file=base.py:_request:476 | Request-UUID a4a54d52658adbc0c74d9c0540862412 for https://cloud.tenable.com/vulns/export
2020-02-05 16:15:15,486 DEBUG pid=3980 tid=MainThread file=exports.py:vulns:253 | Initiated vuln export 35621064-7604-4fa4-839c-4b823a203c95
2020-02-05 16:15:15,486 DEBUG pid=3980 tid=MainThread file=base.py:_request:446 | {"params": {}, "method": "GET", "url": "https://cloud.tenable.com/vulns/export/35621064-7604-4fa4-839c-4b823a203c95/status", "body": {}}
2020-02-05 16:15:15,691 DEBUG pid=3980 tid=MainThread file=connectionpool.py:_make_request:437 | https://cloud.tenable.com:443 "GET /vulns/export/35621064-7604-4fa4-839c-4b823a203c95/status HTTP/1.1" 200 None
2020-02-05 16:15:15,693 DEBUG pid=3980 tid=MainThread file=base.py:_request:476 | Request-UUID 419dbcc8daa5ec7bf4376eed37849703 for https://cloud.tenable.com/vulns/export/35621064-7604-4fa4-839c-4b823a203c95/status
2020-02-05 16:15:15,694 INFO pid=3980 tid=MainThread file=base_modinput.py:log_info:295 | Tenable.io vulns:last_fixed data collection completed

I've included only the vulns section, as all sections seem to report the same results (the plugin section is huge, and does actually import data correctly). Any ideas?