All Topics


We recently moved to S2 (SmartStore) and our initial retention was set to 6 months. A month after the migration we decided to reduce the retention to 3 months, but did not see any reduction in storage in S3. Support found that versioning had been enabled in AWS by the PS engineer during the migration, and that caused this issue. We updated this in indexes.conf: remote.s3.supports_versioning = false. Now the new data which is rolled over is deleted, but we still have old cruft remaining in S3 which is costing us heavily. Support wants us to delete the data manually by running commands from the CLI. Isn't there a better way of doing this? Do AWS lifecycle rules work only for old data which is still lying there? What are the ways to get rid of this old data apart from removing it manually?
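For context, S3 lifecycle rules are not limited to newly written objects; they evaluate everything already in the bucket on their daily runs, so a rule that expires noncurrent versions and removes orphaned delete markers should drain the leftovers without manual deletes. A minimal AWS CLI sketch, assuming a hypothetical bucket name my-smartstore-bucket and that nothing else in the bucket depends on old object versions:

aws s3api put-bucket-lifecycle-configuration \
  --bucket my-smartstore-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "purge-noncurrent-versions",
      "Status": "Enabled",
      "Filter": {},
      "NoncurrentVersionExpiration": { "NoncurrentDays": 1 },
      "Expiration": { "ExpiredObjectDeleteMarker": true }
    }]
  }'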
Hi, actually I am trying to search data, including the archived data, but as you can see in the screenshot below I only get the last 3 months; I think the data older than 3 months was archived. Could you explain how to retrieve data older than 3 months in my case? Regards
Hi, is there any app on Splunkbase to analyze the logs in my Splunk ES so I can stop unwanted log ingestion? Thanks
Hi, can anyone please help figure out which of these apps from our web logs are not required for investigation, and so not needed for ingesting into Splunk, to save license cost? ssl windows-remote-management web-browsing sap ms-office365-base google-base soap new-relic okta ms-onedrive-base windows-push-notifications dns-over-tls crowdstrike dns-over-https outlook-web-online ms-store paloalto-updates websocket apple-push-notifications gmail-base yahoo-web-analytics whatsapp-web naver-line hotmail http-proxy adobe-creative-cloud-base telegram-base ocsp pan-db-cloud windows-azure-base github-base apple-update deepl-base slack-base egnyte-base teamviewer-base google-meet facebook-chat concur-base google-docs-base qlikview paloalto-wildfire-cloud successfactors reddit-base bananatag google-analytics as2 cisco-spark-base viber-base jabber google-chat taobao appdynamics icloud-mail cloudinary-base zoom-base imgur-base webdav splashtop-remote zscaler-internet-access google-drive-web ms-onedrive-business liveperson discord salesforce-base tokbox quora-base paloalto-dns-security giphy-base vimeo-base giphy-downloading notion-base webex-base openai-base paloalto-cloud-identity zendesk-base paloalto-logging-service dailymotion paloalto-prisma-sdwan-control paloalto-shared-services cloudflare-warp sharepoint-online facebook-video   Thanks
I'm thinking of running a script (.BAT file) as an action in the report schedule. However, when I specify a batch file for the script and run it, the script is executed repeatedly, once for each search result. I want the script execution within the report schedule to happen once, regardless of the number of search results. What settings should I make? (e.g. advanced edit properties)
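For what it's worth, this behavior usually comes down to the alert digest mode: when it is off, actions run once per result; when it is on, they run once per triggering of the search. A minimal savedsearches.conf sketch, with "My Scheduled Report" as a placeholder stanza name; in the UI this corresponds to choosing Trigger: Once instead of For each result:

[My Scheduled Report]
alert.digest_mode = 1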
Hello comrades, after my brief research I found that only the heavy forwarder supports props.conf, but those were 5- or 6-year-old posts. I wonder, does the UF now support props.conf? Also, how do I upgrade to a HF? Many thanks,
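For background, the UF has actually read props.conf for a while, but only for a limited set of input-time settings (for example CHARSET, EVENT_BREAKER, and INDEXED_EXTRACTIONS); full parsing settings such as TRANSFORMS still require a heavy forwarder or indexer. A minimal UF-side props.conf sketch, with my:custom:logs as a made-up sourcetype:

[my:custom:logs]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)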
I'm trying to extract all the CVEs and their associated CVSS scores from Shodan's API (JSON response). The response is typically in a format where the number after data depends on the number of services detected; example data:

data: [
  0  22/tcp/OpenSSH: { … },
  1  80/tcp/Apache httpd: {
       vulns: {
         "CVE-2013-6501": {
           cvss: 4.6,
           references: [ … ],
           summary: "The default soap.wsdl_cache_dir setting in (1) php.ini-production and (2) php.ini-development in PHP through 5.6.7 specifies the /tmp directory, which makes it easier for local users to conduct WSDL injection attacks by creating a file under /tmp with a predictable filename that is used by the get_sdl function in ext/soap/php_sdl.c.",

Current search:

| curl method=get uri=https://api.shodan.io/shodan/host/"IP"?key=APIKEY
| spath input=curl_message path="data{0}.vulns" output=test_array
| mvexpand test_array
| spath input=test_array
| table CVE*.cvss

When using curl from WebTools, spath doesn't appear to be extracting all the fields (e.g. only 4 of the 15 CVEs are displayed in the table), likely because of the 5000-character limit for spath. Is there another method that would be able to keep data like the CVE, cvss and summary linked while splitting the data? Splitting on commas seems like it wouldn't work, since the summaries also contain commas.
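One hedged way around the truncation: raise the spath cutoff (limits.conf, [spath] stanza, extraction_cutoff, which defaults to 5000 bytes), and extract the CVE/cvss pairs with rex so they stay linked per service. A sketch building on the search above; the regex assumes cvss is the first key inside each vuln object, as in the sample:

| curl method=get uri=https://api.shodan.io/shodan/host/"IP"?key=APIKEY
| spath input=curl_message path="data{}" output=service
| mvexpand service
| rex field=service max_match=0 "\"(?<cve>CVE-[\d-]+)\":\s*\{\s*\"cvss\":\s*(?<cvss>[\d.]+)"
| eval pair=mvzip(cve, cvss, "=")
| mvexpand pair
| rex field=pair "^(?<cve>[^=]+)=(?<cvss>.+)$"
| table cve cvss

A summary field can be captured with a separate rex over the same service value, since a regex (unlike a comma delimiter) is not thrown off by commas inside the summary text.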
On an isolated network with no internet access (considered a transport network), is there a way to still use add-ons/apps?
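Generally yes: the Splunk host itself never needs internet access for apps. Download the package from splunkbase.splunk.com on a connected machine, transfer it across the boundary, and install it from the file, either through Splunk Web (Manage Apps > Install app from file) or the CLI; a sketch with placeholder paths:

/opt/splunk/bin/splunk install app /tmp/some-add-on_100.tgz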
Tomorrow is CX Day and we are so excited to be able to shine the spotlight on all things customer experience. At Splunk, we truly believe serving our customers is the bedrock of our business, and that building resilience helps transform the customer experience to drive better business outcomes. Learn more about CX Day and dig into some awesome content we’ve created just for the occasion here. Then join us over on LinkedIn Live tomorrow at 11 AM PT / 2 PM ET as Splunk's Chief Customer Officer, Toni Pavlovich, chats with CX influencer, Blake Morgan. It'll be a conversation full of learning, fun, and celebration.   What does Customer Experience mean to you? Have an example of a particularly amazing customer experience? Drop your thoughts below!
Hello All, I am calculating a burn rate in Splunk, using addinfo for enrichment to display it on a dashboard. The burn rate is getting calculated, but the previous day's burn rate is not getting stored anywhere Splunk could refer back to. I am running a report and pushing the values to a lookup with the outputlookup command, and the search below reads from it. In the Dev environment it works fine, but when I move it to production it breaks: the values are calculated but not saved, and the burn rate values do not connect up as in the graph below. Here is the search:

| inputlookup append=t lkp_add.csv
| addinfo
| eval timestamp=if(isnull(timestamp), round(strptime(date + " 23:59:59", "%Y-%m-%d %H:%M:%S"), 0), timestamp), threshold=1
| where desc="add info" AND timestamp>=(now()-(info_max_time-info_min_time))
| stats max(burnrate) as burnrate max(threshold) as threshold by timestamp
| eval _time=strptime(timestamp, "%s")
| timechart max(burnrate) as burnrate max(threshold) as threshold
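One thing worth checking when Dev works and production doesn't is the lookup itself: if lkp_add.csv doesn't exist in production, or its permissions/sharing differ from Dev, the written values never become visible to the dashboard search. A minimal sketch of the writing side, with the schedule and field names as assumptions based on the search above:

... base burn rate search ...
| eval date=strftime(now(), "%Y-%m-%d"), desc="add info"
| table date desc burnrate
| outputlookup append=true lkp_add.csv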
Hi Splunk Team, the documentation says the UF default installation port is 8989 (https://docs.splunk.com/Documentation/Forwarder/9.1.0/Forwarder/Installanixuniversalforwarder) (from 9.1.0; before 9.1.0 that default port information is not on the page at all). May I know if it's a typo and it should have been 8089? Please advise, thanks.
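In the meantime, you can confirm what any given instance is actually listening on from its own CLI (historically the splunkd management port default has been 8089):

$SPLUNK_HOME/bin/splunk show splunkd-port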
I installed Splunk on Ubuntu and installed the Splunk app called Cisco eStreamer client. How can I fix the issue? I configured Cisco Firepower Management Center and Splunk according to this video: https://www.youtube.com/watch?v=pEXM5PVkvH8&t=104s&ab_channel=CiscoSecureFirewall I got an error:

root@platform-dns:/opt/splunk/etc/apps/TA-eStreamer/bin/encore# ../splencore.sh test
Traceback (most recent call last):
  File "./estreamer/preflight.py", line 33, in <module>
    import estreamer.crossprocesslogging
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/__init__.py", line 27, in <module>
    from estreamer.connection import Connection
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/connection.py", line 23, in <module>
    import ssl
  File "/opt/splunk/lib/python3.7/ssl.py", line 98, in <module>
    import _ssl # if we can't import it, let the error propagate
ImportError: /opt/splunk/lib/python3.7/lib-dynload/_ssl.cpython-37m-x86_64-linux-gnu.so: undefined symbol: SSL_state
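That undefined symbol SSL_state error typically means Splunk's bundled Python loaded a different OpenSSL library than the one it was built against. A hedged workaround is to launch the script through splunk cmd, which first sets LD_LIBRARY_PATH and the rest of Splunk's environment:

cd /opt/splunk/etc/apps/TA-eStreamer/bin
/opt/splunk/bin/splunk cmd ./splencore.sh test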
Hi Community, I have been tasked with getting AWS CloudTrail logs into Splunk. I have spent some time not just reading how to accomplish this but also testing it in my own AWS environment. The org that I work for uses Control Tower (not on the current version) to provide landing zones. If you know anything about Control Tower, it basically provisions accounts on your behalf and sets up guardrails for ease of scalability. One account that is provisioned is named Log Archive, which I am interested in. My question is: would I access this archiving account and set up a CloudWatch group and Kinesis Firehose stream? Or do I need to access the logs in this archive logging account from another account? Maybe I am not asking this question correctly, but it seems like Control Tower makes log aggregation easier while also complicating how to access the logs. Let me know if clarification is needed. Thanks!
Is it possible to modify the value of a token obtained from a dashboard input prior to it being used in a panel? In my scenario, a domain value is input and various searches are executed on it. Sometimes the domain is provided to the users in a "sanitized" format to avoid clicking of links: the "." is replaced with "[.]". I want to give users the option of inputting domains in either format, sanitized or not, and have the token value rewritten to remove the square brackets, something akin to:

| replace "[.]" WITH "." IN $domain$

The dashboard was created in the Classic format. I have been unable to figure out how I might modify the dashboard source to eval or modify the value into a consistent format. One of the things I tried was adding an <eval> tag in the source to evaluate the token into a new token value, leveraging a replace command to modify it in the process, but I got a message stating "Invalid child="eval" is not allowed in node="dashboard"". So if an <eval> tag is the solution, I am not sure where to put it. Does anyone have insight on how I might achieve this token modification cleanly?
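A minimal Simple XML sketch of the usual fix: the <eval> is allowed inside a <change> block on the input itself (not at the dashboard level), and panels then reference the derived token. Here domain and clean_domain are the token names from the question; replace() takes a regex, so the brackets and dot are escaped:

<input type="text" token="domain">
  <label>Domain</label>
  <change>
    <eval token="clean_domain">replace($value$, "\[\.\]", ".")</eval>
  </change>
</input>

Panels would then use $clean_domain$ in their searches instead of $domain$.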
Hello, I've been tasked with having the results of a playbook show up as a note in a different phase. Any instructions or ideas are welcome. Thanks so much.
Hi all, I searched for my issue on the community. There are lots of threads, but I couldn't find my issue. As I understand it, I cannot see the fields of 2 event IDs (both of them) in the same search because the fields are different. I want to see the fields of 2 different event IDs in the same search. My issue is a bit complicated, so I will explain with basic fields and change them later.

First search:
index=wineventlog EventID=1 process_name=chrome.exe | stats count by Image process_name process_path CommandLine

Second search:
index=wineventlog EventID=3 DestinationHostname=google.com | stats count by Image SourceIP DestinationIP DestinationHostname

I want to join these 2 searches into one search and see the fields of both event IDs together. I found the join command but couldn't figure out how to use it. Any help would be appreciated!
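A common alternative to join is to pull both event types back in a single search and roll them up with stats on a shared field; a minimal sketch, assuming Image is the field that links the two event types (swap in whatever field they actually share):

index=wineventlog ((EventID=1 process_name=chrome.exe) OR (EventID=3 DestinationHostname=google.com))
| stats values(process_path) as process_path values(CommandLine) as CommandLine values(SourceIP) as SourceIP values(DestinationIP) as DestinationIP values(DestinationHostname) as DestinationHostname count by Image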
I have the following data in the logfile:

{xxxx},{"GUID":"5561859B8D624FFC8FF0B87219060DC5"}
{xxxx},{"GUID":"5561859B8D624FFC8FF0B87219060DC6","account":"verified"}
{xxxx},{"GUID":"5561859B8D624FFC8FF0B87219060DC7","account":"unverified"}
{xxxx},{"GUID":"5561859B8D624FFC8FF0B87219060DC8","account":"verified"}
{xxxx},{"GUID":"5561859B8D624FFC8FF0B87219060DC9"}

I need a report like the following, so that I get the count of "verified" where it is explicitly mentioned; otherwise the event should count under "unverified":

Type        Count
Verified    2
Unverified  3

How can we achieve this? Will appreciate your inputs!
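A minimal SPL sketch, assuming the account field is already extracted from the JSON half of the event (index and sourcetype are placeholders); coalesce() supplies the default for events where the key is absent:

index=your_index sourcetype=your_sourcetype
| eval Type=if(coalesce(account, "unverified")="verified", "Verified", "Unverified")
| stats count as Count by Type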
I'm trying to break out the comma-separated values in my results but I'm brain farting. I want to break out the specific reasons: {New Geo-Location=NEGATIVE, New Device=POSITIVE, New IP=NEGATIVE, New State=NEGATIVE, New Country=NEGATIVE, Velocity=NEGATIVE, New City=NEGATIVE}

index="okta" actor.alternateId="*mydomain*" outcome.reason=*CHALLENGE* client.geographicalContext.country!="" actor.displayName!="Okta System" AND NOT "okta_svc_acct"
| bin _time span=45d
| stats count by outcome.reason, debugContext.debugData.behaviors
| sort -count outcome.reason debugContext.debugData.behaviors

Sign-on policy evaluation resulted in CHALLENGE {New Geo-Location=NEGATIVE, New Device=POSITIVE, New IP=NEGATIVE, New State=NEGATIVE, New Country=NEGATIVE, Velocity=NEGATIVE, New City=NEGATIVE}
Sign-on policy evaluation resulted in CHALLENGE {New Geo-Location=NEGATIVE, New Device=NEGATIVE, New IP=NEGATIVE, New State=NEGATIVE, New Country=NEGATIVE, Velocity=NEGATIVE, New City=NEGATIVE}
Sign-on policy evaluation resulted in CHALLENGE {New Geo-Location=NEGATIVE, New Device=POSITIVE, New IP=POSITIVE, New State=NEGATIVE, New Country=NEGATIVE, Velocity=NEGATIVE, New City=NEGATIVE}
Sign-on policy evaluation resulted in CHALLENGE {New Geo-Location=POSITIVE, New Device=POSITIVE, New IP=POSITIVE, New State=NEGATIVE, New Country=NEGATIVE, Velocity=NEGATIVE, New City=POSITIVE}
Sign-on policy evaluation resulted in CHALLENGE {New Geo-Location=NEGATIVE, New Device=NEGATIVE, New IP=POSITIVE, New State=NEGATIVE, New Country=NEGATIVE, Velocity=NEGATIVE, New City=NEGATIVE}
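A minimal sketch of one way to split those pairs out, assuming the field is debugContext.debugData.behaviors formatted exactly as shown: strip the braces, split on the comma-space delimiter, expand, then separate each name from its result:

index="okta" actor.alternateId="*mydomain*" outcome.reason=*CHALLENGE*
| eval behaviors='debugContext.debugData.behaviors'
| rex mode=sed field=behaviors "s/[{}]//g"
| makemv delim=", " behaviors
| mvexpand behaviors
| rex field=behaviors "^(?<behavior>[^=]+)=(?<result>.+)$"
| stats count by behavior result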
Hi to all, I'm a newbie in Splunk and I need to check if Splunk Cloud is receiving traffic from our network infrastructure. I thought of doing it via an API request, but I can't find the URL where to make the request. Could anybody point me to the documentation for this, or tell me how I can do it? Thanks in advance! David.
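A hedged sketch using the REST search API, assuming a stack name of yourstack (a placeholder) and that REST access on port 8089 is open for your Splunk Cloud stack (on some Cloud plans this must be requested through Support); running the same search in the Splunk Web UI works just as well:

curl -k -u yourusername \
  https://yourstack.splunkcloud.com:8089/services/search/jobs/export \
  -d search='search index=* earliest=-15m | stats count by host sourcetype' \
  -d output_mode=json

If hosts from your network infrastructure show up in the results, data is arriving.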
Hello, I'm trying to run a chart command grouped by 2 fields but I'm getting an error. This is my query:

| chart values(SuccessRatioBE) as SuccessRatioBE over _time by UserAgent LoginType

and I'm getting this error: "Error in 'chart' command: The argument 'LoginType' is invalid." I also tried separating the fields with a comma, and with ticks.
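chart accepts a single over field and a single by field, which is why LoginType is rejected. The usual workaround is to concatenate the two grouping fields into one with eval first; a minimal sketch:

| eval group=UserAgent . ":" . LoginType
| chart values(SuccessRatioBE) as SuccessRatioBE over _time by group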