All Topics


sample log: {"date" : "2021-01-01 00:00:00.123 | dharam=fttc-pb-12312-esse-4 | appLevel=INRO | appName=REME_CASHE_ATTEMPT_PPI | env=sit | hostName=apphost000adc | pointer=ICFD | applidName=http.ab.w... See more...
sample log: {"date" : "2021-01-01 00:00:00.123 | dharam=fttc-pb-12312-esse-4 | appLevel=INRO | appName=REME_CASHE_ATTEMPT_PPI | env=sit | hostName=apphost000adc | pointer=ICFD | applidName=http.ab.web.com|news= | list=OUT_GOING | team=norpass | Category=success | status=NEW | timeframe=20", "tags": {"host": "apphost000adc" , "example": "6788376378jhjgjhdh2h3jhj2", "region": null, "resource": "add-njdf-tydfth-asd-1"}} used below regex to extract all fields  , but  one field is not getting extracted, that is timeframe |regex  _raw= (\w+)\=(.+?) \| how to modify my regex to extract timeframe field as well.
Hi all, getting to grips with SPL and would be forever grateful if someone could lend their brain for the below.

I've got a lookup in the format below:

(Fields) --> host, os, os version
(Values) --> Server01, Windows, Windows Server 2019

In my case this lookup has 3000 host values, and I want to know their source values in Splunk. (This lookup was generated by a match condition with another, so I KNOW that these hosts are present in my Splunk env.)

I basically need a way to do the following:

| tstats values(sources) where index=* host=(WHATEVER IS IN MY LOOKUP HOST FIELD) by index, host

But I can't seem to find a way. I originally tried the below:

| tstats values(source) where index=* by host, index | join type=inner host [| inputlookup mylookup.csv | fields host | dedup host]

But my results were too large for Splunk to handle. Please help!
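A hedged sketch of one common pattern, assuming the lookup really is mylookup.csv with a host field as above: feed the subsearch straight into the tstats WHERE clause, so the hosts become an OR-ed host=... filter and no join is needed:

| tstats values(source) AS sources WHERE index=* [| inputlookup mylookup.csv | fields host | dedup host] BY index, host

The subsearch output is automatically formatted as (host="Server01" OR host="..."), and 3000 hosts is well within the default subsearch result limit.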
My environment consists of 1 search head, 1 manager, and 3 indexers. I added another search head so that I can put Enterprise Security on it, but when I run any search I get this error. (The only reason I used index=* was to show that ALL indexes are like this; no matter what I search, this happens.) What I'm most confused about is why the bottom portion (where the search results are) is greyed out so I can't interact with it.

Here are the last few lines from the search.log; if more is required I can send more of the log, it's just really long.

04-03-2024 18:00:38.937 INFO SearchStatusEnforcer [11858 StatusEnforcerThread] - sid=1712181568.6, newState=BAD_INPUT_CANCEL, message=Search auto-canceled
04-03-2024 18:00:38.937 ERROR SearchStatusEnforcer [11858 StatusEnforcerThread] - SearchMessage orig_component=SearchStatusEnforcer sid=1712181568.6 message_key= message=Search auto-canceled
04-03-2024 18:00:38.937 INFO SearchStatusEnforcer [11858 StatusEnforcerThread] - State changed to BAD_INPUT_CANCEL: Search auto-canceled
04-03-2024 18:00:38.945 INFO TimelineCreator [11862 phase_1] - Commit timeline at cursor=1712168952.000000
04-03-2024 18:00:38.945 WARN DispatchExecutor [11862 phase_1] - Execution status=CANCELLED: Search has been cancelled
04-03-2024 18:00:38.945 INFO ReducePhaseExecutor [11862 phase_1] - Ending phase_1
04-03-2024 18:00:38.945 INFO UserManager [11862 phase_1] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.948 INFO UserManager [11858 StatusEnforcerThread] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.950 INFO DispatchManager [11855 searchOrchestrator] - DispatchManager::dispatchHasFinished(id='1712181568.6', username='b.morin')
04-03-2024 18:00:38.950 INFO UserManager [11855 searchOrchestrator] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.950 ERROR ScopedAliveProcessToken [11855 searchOrchestrator] - Failed to remove alive token file='/opt/splunk/var/run/splunk/dispatch/1712181568.6/alive.token'. No such file or directory
04-03-2024 18:00:38.950 INFO SearchOrchestrator [11852 RunDispatch] - SearchOrchestrator is destructed. sid=1712181568.6, eval_only=0
04-03-2024 18:00:38.952 INFO UserManager [11861 SearchResultExecutorThread] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.961 INFO SearchStatusEnforcer [11852 RunDispatch] - SearchStatusEnforcer is already terminated
04-03-2024 18:00:38.961 INFO UserManager [11852 RunDispatch] - Unwound user context: b.morin -> NULL
04-03-2024 18:00:38.961 INFO LookupDataProvider [11852 RunDispatch] - Clearing out lookup shared provider map
04-03-2024 18:00:38.962 INFO dispatchRunner [10908 MainThread] - RunDispatch is done: sid=1712181568.6, exit=0
I am trying to exclude these events from a search. They are almost all the same; just the sshd instance (the PID in brackets) changes. Can someone help me exclude them, besides excluding each one individually?

ras1-dan-cisco-swi error: PAM: Authentication failure for illegal user djras123 from 192.168.1.2 - dcos_sshd[17284]
ras1-dan-cisco-swi error: PAM: Authentication failure for illegal user djras123 from 192.168.1.2 - dcos_sshd[29461]
ras1-dan-cisco-swi error: PAM: Authentication failure for illegal user djras123 from 192.168.1.2 - dcos_sshd[4064]
ras1-dan-cisco-swi error: PAM: Authentication failure for illegal user djras123 from 192.168.1.2 - dcos_sshd[9450]

Thanks guys!
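A minimal sketch, with index=your_index and sourcetype=your_sourcetype as hypothetical placeholders for the base search: exclude on the static part of the message, so the changing dcos_sshd PID never matters:

index=your_index sourcetype=your_sourcetype NOT "PAM: Authentication failure for illegal user djras123 from 192.168.1.2"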
I've tried using HTML codes like <p> or <b>test</b> and it makes no difference. I'd like to format a much more complete summary of the event that's more thorough, human-readable, and better formatted. Is there a way to do this?
I'm currently running Splunk Enterprise 9.1.3 and Splunk DB Connect 3.16. When logging into Splunk I receive an error message in DB Connect that states, "Can not communicate with task server, check your settings." I made sure the correct path is set in dbx_settings.conf and in customized.java.path as well. Any suggestions would help.
The Splunk Universal Forwarder upgrade to 9.1.3 is failing with the copy error "Setup can not copy the file SplunkMonitorNoHandleDrv.sys". The error message is attached.
How do I get a count of rows that have a value greater than 0? Example below; the last column is what we are trying to generate.

Name    2024-02-06  2024-02-08  2024-02-13  2024-02-15  Count_Of_Rows_with_Data
Pablo   1           0           1           0           2
Eli     0           0           0           0           0
Jenna   1           0           0           0           1
Chad    1           0           5           0           2
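A hedged sketch, assuming the date columns all match 2024-* and hold numbers: foreach walks the wildcarded columns and increments the counter whenever a value is greater than 0 (single quotes around <<FIELD>> because the column names contain hyphens):

... your existing search ...
| fillnull value=0
| eval Count_Of_Rows_with_Data = 0
| foreach 2024-* [ eval Count_Of_Rows_with_Data = Count_Of_Rows_with_Data + if('<<FIELD>>' > 0, 1, 0) ]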
I ran a | rest search to export the list of saved searches along with their cron schedules. The cron schedules are not showing the time in UTC. For example, the | rest output for one search shows a cron schedule of 10 14 * * *, but when I look at the Reports tab on the SHC, the "Next Scheduled Time" column for that search shows 2024-04-07 18:10:00 UTC.

My SHC and deployer Splunk servers are both set to UTC as the default system time. In the SHC UI, my preferences are also set to view data in the default system time. I am physically located in the Eastern time zone.

I am trying to see how to fix this so the | rest output of saved searches and their cron schedules is in UTC.
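For reference, a sketch of the kind of | rest search in question (field names as exposed by the saved/searches endpoint):

| rest /servicesNS/-/-/saved/searches
| table title, cron_schedule, next_scheduled_time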
Is it possible to have an expression in the case function for the Y argument? case(X,"Y",...)

| eval test=case(x=="X", 'a+b')

Instead of a string or number, can the Y argument be an expression like field a + field b?

Thanks!
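A minimal sketch of the shape in question, assuming numeric fields a and b (note that 'a+b' in single quotes is read as a single field literally named a+b, not as an expression):

| eval test = case(x=="X", a + b, x=="Y", a * b, true(), null())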
Is it possible for the next version of the add-on to add MS Defender vulnerability API calls? Currently there are only "Microsoft Defender for incident" and "Microsoft Defender endpoint alert". We need another one added for "Microsoft Defender for Vulnerabilities". Here are the APIs and the permissions needed:

Collected data                   API call                                                                                          Permission needed
Machine info                     GET https://api.securitycenter.microsoft.com/api/machines                                         Machine.Read.All
Full export of vulnerabilities   GET https://api.securitycenter.microsoft.com/api/machines/SoftwareVulnerabilitiesExport          Vulnerability.Read.All
Delta export of vulnerabilities  GET https://api.securitycenter.microsoft.com/api/machines/SoftwareVulnerabilityChangesByMachine  Vulnerability.Read.All
Description of vulnerabilities   POST https://api.security.microsoft.com/api/advancedhunting/run                                  AdvancedHunting.Read.All

https://github.com/thilles/TA-microsoft-365-defender-threat-vulnerability-add-on?tab=readme-ov-file#resources
Hi team, I am following the instructions below to bring Genesys Cloud logs into Splunk: https://splunkbase.splunk.com/app/6552. Under the details and installation instructions of the app, I can't find the configuration, and it also did not prompt me for the input configuration.
I am trying to determine a host's percentage of time logging to Splunk within a summary index we created. We have an index called "summary_index" and a field called "host_reported" that shows if a host has been seen in the past hour.

Here is the search I am using to see all hosts in the summary index that were seen within the last 24 hours:

index=summary_index | stats count by host_reported

What I am trying to do is develop a search that shows me what percentage of the time over the past 7 days each host has reported to this summary index. For example, if host A only reported to the summary index on 6 of the 7 days, I want it to show that its "up time" was 86% for the past 7 days.
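A hedged sketch at day granularity, matching the 6-of-7-days example (hourly granularity would bucket by 1h and divide by 168 instead): count the distinct days each host appears on and divide by 7:

index=summary_index earliest=-7d@d latest=@d
| bin _time span=1d
| stats dc(_time) AS days_seen BY host_reported
| eval uptime_pct = round(days_seen / 7 * 100)

E.g. a host seen on 6 distinct days gives round(6/7*100) = 86.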
Hi team, our Splunk search heads are hosted in Splunk Cloud and managed by Support, and we are currently running the latest version (9.1.2308.203).

This pertains to the Max Lines setting in the Format section of the Search and Reporting app. Previously, Splunk defaulted to displaying 20 or more lines in search results, and as an administrator extracting Splunk logs across various applications over the years, I never needed to expand brief search results to read all the lines. In recent weeks, however, possibly following an upgrade of the search heads, I've observed that each time I open a new search window, or an existing tab times out and auto-refreshes, the Format > Max Lines option is reset to 5. Consequently, I find myself changing it after nearly every search, which has become cumbersome.

Kindly guide me on how to change the default value from 5 to 20 in the Search and Reporting app on both search heads. This adjustment would help our customers and end-users, who find it cumbersome to modify the setting for each search.
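In case it helps frame the request: on Splunk Enterprise this kind of default can be set per app in ui-prefs.conf, as in the sketch below (it is an assumption that the same stanza applies on a Support-managed Cloud stack, where the change would normally go through a support ticket):

# etc/apps/search/local/ui-prefs.conf
[search]
display.events.maxLines = 20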
I have an issue with ES not showing all of the views depending on which user is logged in. Is there a location for permissions of the views? For example, if I am logged in as a Splunk admin I can see all of the views (screenshot), but as an ESS admin I see fewer (screenshot). Most important is that Incident Review is not there. When I go to Configure > All Configurations > General > Navigation as the ESS admin, all of the views are shown for me to move around and configure, yet the ribbon remains the same. Where should I look for what is different?
Requesting help with a search query. I have application logs in Splunk like:

2024-04-02T12:26:02.244-04:00,severity=DEBUG,thread=main,logger=org.apache.catalina.core.NamingContextListener,{},Creating JNDI naming context
2024-04-02T12:26:02.118-04:00,severity=DEBUG,thread=main,logger=org.apache.catalina.core.NamingContextListener,{}, Adding resource ref UserDatabase ResourceRef[className=org.apache.catalina.UserDatabase,factoryClassLocation=null,factoryClassName=org.apache.naming.factory.ResourceFactory,{type=description,content=User database that can be updated and saved},{type=scope,content=Shareable},{type=auth,content=Container},{type=singleton,content=true},{type=factory,content=org.apache.catalina.users.MemoryUserDatabaseFactory},{type=pathname,content=conf/tomcat-users.xml}]

And I'm using the following query to separate the different sections of the message:

index=my_app_index AND source="**/my-app-service.log" AND sourcetype="app_v1"|rex="(?<mydatetime>^\S*)\,severity=(?<severity>\S*)\,thread=(?<thread>\S*)\,logger=(?<logger>\S*)\,\{\}\,(?<logmsg>(.)*)"|table mydatetime,logger,thread,_raw,logmsg|rename logmsg AS MESSAGE

What I see is that the columns mydatetime and logmsg (MESSAGE) are empty. What I expect is that mydatetime contains the initial date-time and logmsg (MESSAGE) contains the last message part:

mydatetime                     logger                                          thread  logmsg
2024-04-02T12:26:02.244-04:00  org.apache.catalina.core.NamingContextListener  main    Creating JNDI naming context
2024-04-02T12:26:02.118-04:00  org.apache.catalina.core.NamingContextListener  main    Adding resource ref UserDatabase ResourceRef[className=org.apache.catalina.UserDatabase,factoryClassLocation=null,factoryClassName=org.apache.naming.factory.ResourceFactory,{type=description,content=User database that can be updated and saved},{type=scope,content=Shareable},{type=auth,content=Container},{type=singleton,content=true},{type=factory,content=org.apache.catalina.users.MemoryUserDatabaseFactory},{type=pathname,content=conf/tomcat-users.xml}]
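A hedged sketch of one possible correction: the rex command takes its pattern as a bare quoted string (| rex "..."), not |rex="..."; the capture groups below are otherwise kept from the original query:

index=my_app_index source="**/my-app-service.log" sourcetype="app_v1"
| rex "(?<mydatetime>^\S+),severity=(?<severity>\S+),thread=(?<thread>\S+),logger=(?<logger>\S+),\{\},\s*(?<logmsg>.*)"
| table mydatetime, logger, thread, _raw, logmsg
| rename logmsg AS MESSAGE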
When writing plain text in the Next Steps field of a notable event, such as "Mitre ATT&CK", it is then shown, when the notable is created, as "Mitre ATT&amp;CK", which is clearly incorrect. Is it possible to escape the & character in some way?

This also happens when using action:url too: [[action|url:Mitre ATT&CK ]] is shown as Mitre ATT&amp;CK.

Any help would be appreciated.
curl -k -u svc_aas -d search="search index=aas sourcetype=syslog" https://splunk-prod-api.internal.xxxx.com/services/search/jobs

I want to run this using Postman. Can someone help me frame the Postman requests to search for and retrieve Splunk logs?
Hi all, I am currently testing the HTTP Event Collector (HEC) with a Splunk Cloud trial account. All I do is post data to the HEC URL; this works perfectly against a local Splunk Enterprise instance at http://127.0.0.1:8088/services/collector/event, but against the Cloud trial stack the request fails with the SSL error below.

One solution I saw on the community forum was to disable SSL validation; however, that isn't the best option to use in production, for security reasons. Another solution I saw was to upload certificates, but that option isn't suited for a SaaS solution with many different customers. Is it possible to solve this issue in a different way? I would also like to ask whether this problem persists for normal production client accounts, and whether there is a generic solution for it.

Curl request:

curl https://prd-p-xxxxx.splunkcloud.com:8088/services/collector/event -H "Authorization: Splunk token" -d '{"event": "hello world"}'

Curl response:

curl: (60) SSL certificate problem: self signed certificate in certificate chain
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above.

Thank you for your time and assistance in addressing these inquiries.
I am sending logs from an application to the Splunk server via Splunk Logging for Java, using the HTTP Event Collector with a log4j2 configuration. The logs are printed correctly to the console but are not getting pushed to the Splunk server, and I am not even getting any error. Below is my log4j2.xml configuration file:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="info" name="example" packages="org.example">
    <Appenders>
        <Console name="console" target="SYSTEM_OUT">
            <PatternLayout pattern="%style{%d{ISO8601}} %highlight{%-5level} [%style{%t}{bright,blue}] %style{%C{10}}{bright,yellow}: %msg%n%throwable" />
        </Console>
        <File name="MyFile" fileName="logs/app.log">
            <PatternLayout>
                <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
            </PatternLayout>
        </File>
        <SplunkHttp name="httpconf"
                    url="http://localhost:8088"
                    token="b489e167-d96d-46ec-922f-6b25fc83f199"
                    host="localhost"
                    index="spring_dev"
                    source="source name"
                    sourcetype="log4j"
                    messageFormat="text"
                    disableCertificateValidation="true">
            <PatternLayout pattern="%m" />
        </SplunkHttp>
    </Appenders>
    <Loggers>
        <Root level="info">
            <AppenderRef ref="console" />
            <AppenderRef ref="MyFile" />
            <AppenderRef ref="httpconf" />
        </Root>
    </Loggers>
</Configuration>