All Topics

This ^ is a sample XML log file that I want to onboard. Please guide me on the settings I should configure to properly input this data, and tell me on which instances the settings (props.conf and transforms.conf) are required. I am running a distributed deployment with indexer clustering.
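A minimal sketch of the kind of props.conf stanzas involved, assuming each XML event starts with a root tag such as <event> and carries a <timestamp> element (the sourcetype name, tag, and time format are placeholders, since the sample isn't shown here). Index-time settings like LINE_BREAKER belong on the first full Splunk instance that parses the data (a heavy forwarder if present, otherwise the indexer cluster, pushed from the cluster master); search-time settings like KV_MODE belong on the search heads:

# props.conf (parsing tier: heavy forwarder or indexer cluster)
[my:xml:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=<event>)
TIME_PREFIX = <timestamp>
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N
TRUNCATE = 100000

# props.conf (search heads)
[my:xml:sourcetype]
KV_MODE = xml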
Hi all, does this add-on support Azure certificate-based authentication? The documentation has steps for using a Client ID + Client Secret, and doesn't mention certificate-based authentication, but I wanted to double-check. Thanks, Chris
Hi, I have installed and configured the Palo Alto add-on, which is creating multiple eventtypes, one of which is pan_traffic_start; I believe these are the session-start logs. I want to drop these particular events so that license usage can be saved, since there is not much value in them. Can someone please help me filter out this eventtype so the events never reach the indexers or search heads?
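A minimal sketch of the usual nullQueue approach, assuming the data arrives with sourcetype pan:traffic. Note that eventtypes exist only at search time, so the filter has to match the raw event text instead; the REGEX below assumes the PAN-OS CSV carries a literal ",TRAFFIC,start," marker and will need adjusting to your actual log format. These settings go on the first parsing instance (heavy forwarder if you have one, otherwise the indexers, pushed from the cluster master):

# props.conf
[pan:traffic]
TRANSFORMS-drop_start = drop_pan_traffic_start

# transforms.conf
[drop_pan_traffic_start]
REGEX = ,TRAFFIC,start,
DEST_KEY = queue
FORMAT = nullQueue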
Hello, I have a query that returns results when I run it over 1 hour, but when I try to run it for more than 1 hour it returns no results.

index=clientlogs sourcetype=clientlogs Mode=Real ApplicationIdentifier="*" "orders-for-open" (Action="OpenPositionRequest" AND Level=Info)
| eval StartTime=strptime(ClientDateTime,"%Y-%m-%dT%H:%M:%S.%3N")
| rename Request_Id AS RequestId
| stats min(StartTime) as StartTime min(_time) AS _time BY RequestId
| join RequestId
    [ search index=clientlogs sourcetype=clientlogs Mode=Real ApplicationIdentifier="*" Message="Create OrderForOpen"
    | rename OrderID AS PushEventData_Position_OrderID ]
| join PushEventData_Position_OrderID
    [ search index=clientlogs sourcetype=clientlogs Mode=Real ApplicationIdentifier="*" (Message="Position.Open" AND (PushEventData_Position_OrderType=17 OR PushEventData_Position_OrderType=18))
    | eval finishTime=strptime(ClientDateTime,"%Y-%m-%dT%H:%M:%S.%3N")
    | stats min(finishTime) as finishTime min(_time) AS _time BY PushEventData_Position_OrderID ]
| eval Latency=finishTime-StartTime
| where Latency>0
| timechart avg(Latency) span=1m
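One likely culprit: join runs its subsearches under the subsearch limits (roughly 50,000 results and a runtime cap), and over a longer time range the subsearches silently truncate, so the join finds no matches. A hedged sketch of the usual join-free pattern for the first hop, under the assumption that the "Create OrderForOpen" events carry both Request_Id/RequestId and OrderID so the two event types can be correlated with stats (the second hop on PushEventData_Position_OrderID would follow the same pattern):

index=clientlogs sourcetype=clientlogs Mode=Real ((Action="OpenPositionRequest" AND Level=Info) OR Message="Create OrderForOpen")
| eval StartTime=if(Action="OpenPositionRequest", strptime(ClientDateTime,"%Y-%m-%dT%H:%M:%S.%3N"), null())
| eval RequestId=coalesce(Request_Id, RequestId)
| stats min(StartTime) as StartTime values(OrderID) as PushEventData_Position_OrderID by RequestId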
Hi, I'm trying to figure out how to get data for the past few weeks, filtered so that each week runs from Saturday (of the previous week) to Friday. I will send a report every Friday. The report should look like this:

DATE        COUNT    NAME
21-01-22    58       one
14-01-22    58       one
07-01-22    45       two

Then on the next Friday one more row is added to the report:

DATE        COUNT    NAME
28-01-22    61       one
21-01-22    58       one
14-01-22    58       one
07-01-22    45       two

@ITWhisperer @gcusello
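A minimal sketch, assuming the events carry a NAME field and the count is simply events per week; @w6 snaps relative times back to the most recent Saturday, so each bucket covers Saturday through Friday (the index name and the 4-week window are placeholders):

index=your_index earliest=-4w@w6 latest=@w6
| eval week_start=relative_time(_time, "@w6")
| eval DATE=strftime(week_start, "%d-%m-%y")
| stats count as COUNT by DATE NAME
| sort - DATE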
Hi, all! I wish to display the events without fields like "host", "source", and "sourcetype", like the photo below, on my dashboard. But when I save it as a dashboard, it still shows these fields! How can I solve the problem?
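A hedged Simple XML sketch: in the dashboard source, the event viewer accepts a <fields> element listing which fields to show, so restricting it to _time and _raw should hide the host/source/sourcetype row (the query and time range are placeholders, and I'd double-check the <fields> behaviour against the Simple XML reference for your version):

<event>
  <search>
    <query>index=your_index your search terms</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <fields>["_time","_raw"]</fields>
</event>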
Hi, we are having issues integrating full compatibility of Splunk Enterprise alerts in Opsgenie. The current Splunk app for Opsgenie is not editable like Slack or e-mail, where you can choose what to capture directly from it. This is somewhat limiting our delivery of alerts and making them less dynamic. The fields captured by Opsgenie do not include the critical component that we would like to have, i.e. MESSAGE. To give you a bit of insight, our team is a 24x7 NOC that should receive Splunk alerts forwarded into Opsgenie, and each alert must contain free-text input related to triage steps and Confluence links. I would like to know if there are alternatives in Splunk, for example concatenating free text into a Splunk search query so that it can be captured by the current Opsgenie setup. For example:

Base query:
index=*titanic*

Free text query:
index=*titanic* | It doesn't end well

In the latter example, I want Splunk to attach the text to the search so I can append it to an alert; the free-text part would include the necessary triage steps and links my team needs to go directly to Confluence. I don't know if this is possible, but maybe someone knows.
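A minimal sketch of one common workaround: materialise the free text as a field with eval, so any alert action that can reference result tokens (e.g. $result.triage$) can pick it up. Whether the Opsgenie app exposes result tokens is an assumption worth verifying, and the Confluence URL below is a placeholder:

index=*titanic*
| eval triage="It doesn't end well. Triage steps: see https://confluence.example.com/display/NOC/titanic-runbook"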
With the following query on the Splunk search head, a total per service is displayed for each 2-day span:

timechart limit=0 useother=false span=2d count by service

What I want instead is a data point every two days showing the total for the aggregation day only. What would that query look like?
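A hedged sketch: bucket by single days first, then keep every other day by epoch-day parity (the 86400-second arithmetic assumes UTC day boundaries, so a timezone offset may be needed):

timechart limit=0 useother=false span=1d count by service
| where (floor(_time / 86400) % 2) == 0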
In our env, we've had a high value for remote.s3.multipart_upload.part_size to work around a bug present in versions prior to 8.1. When I recently reverted the setting to its default, these logs started popping up from time to time:

WARN S3Client [34628 cachemanagerUploadExecutorWorker-0] - command=multipart-upload command=put transactionId=.... rTxnId=... status=completed success=N uri=... statusCode=500 statusDescription="Internal Server Error" payload="<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>InternalError</Code><Message>We encountered an internal error. Please try again.</Message><RequestId>...</RequestId><HostId>...</HostId></Error>"

ERROR S3Client [103868 cachemanagerUploadExecutorWorker-0] - multipart-upload command=end transactionId=... rTxnId=... parts=5 uploadId="..." status=completed success=N payload="<Error><Code>InternalError</Code><Message>We encountered an internal error. Please try again.</Message><RequestId>...</RequestId><HostId>...</HostId></Error>"

I checked, and the uploaded file is the same size as the local one. I was also able to confirm the remote and local files have the same CRC, so it appears the upload worked. The file modtime in the S3 console is three seconds before the failed-upload error message was recorded. My guess is there was a temporary issue and the retry succeeded. I can't find any follow-up logs about the bucket id after the error message. Does anyone else observe this? Should I just accept these error messages, or is there something I can do about it?
Hi, all! I came across an issue with extracting fields from syslog. Here are some samples of the value ("Call_Session_ID") I want to extract:

JKYFnxBdcIIiBImIMsoJm67
tMKtr5WNYa2e9PqC1cBhswf
YoKwDKa_K9m4SS1qzbecNbl
hGpydwuxLF_iYw5AE0pe81g
F440sxU_Ntqg2zswAXgt-lW

Here's the regular expression generated by Splunk:

^[^\|\n]*\|(?P<Call_Session_ID>\w+)

Here are some sample events:

2022-01-25 12:08:04,925|F440sxU_Ntqg2zswAXgt-lW|INFO|com.hsbc.amh.civr.fallout.node.AmhCivrGenesysXferNode|execute()|***End Call***
2022-01-25 12:11:49,229|pbDdnF8QT6Bku0odJ4SL_Q8|INFO|com.hsbc.amh.civr.endcall.node.AmhCivrExitNode|execute()|***End Call***
2022-01-25 12:27:03,958|42dHIbXvXBKqG20u_m3kU5R|INFO|com.ibm._jsp._xfer_5F_genesys:svf.nodename|_jspService()|Contact Data Sent: UD_DIALLED_SERVICE:OneNumber_Jade~UD_IVR_STARTCALL_REF:0~UD_IS_HANGUP:N~UD_CUSTOMER_TYPE:Jade~UD_FALLOUT_SECTION:BankPaymentTransfer~UD_PROPOSITION:Jade~UD_SUBPROPOSITION:GeneralBanking~UD_LANGUAGE:Cantonese~UD_FALLOUT_QUEUE:Default~UD_COUNTRY_CODE:HKCC~UD_FALLOUT_REASON:Agent

Some facts about the log files: the Call_Session_ID always follows the timestamp. But there are some errors when the results come out. Firstly, there are some null values in the result. Secondly, the result sometimes shows only part of the value. Checking back against the events shows that those Call_Session_IDs contain a hyphen:

2022-01-25 11:59:18,032|Yih9YAueLZSJ-va5ZAVllOc|INFO|com.hsbc.amh.civr.endcall.node.AmhCivrExitNode|execute()|***End Call***

How can I solve the problem?
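A likely fix, given the samples above: \w does not match the hyphen, so the capture stops at it. Widening the character class, or simply capturing everything up to the next pipe, should cover IDs like F440sxU_Ntqg2zswAXgt-lW:

^[^\|\n]*\|(?P<Call_Session_ID>[\w-]+)\|

or, more loosely:

^[^\|\n]*\|(?P<Call_Session_ID>[^\|\n]+)\|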
I am trying to uninstall the Splunk agent on a server and I get this message. I also tried doing it from the command line and it shows a similar error with the same message. How can I resolve this problem?
My query, after finalizing for some time, gives me: "The search process with sid=... was forcefully terminated because its physical memory usage has exceeded the 'search_process_memory_usage_threshold' setting in limits.conf." I am not allowed to increase memory. Any suggestions on how to tweak the query to avoid the forceful termination?
=================
(index=bsa) sourcetype=wf:esetext:user_banks:db OR sourcetype=wf:esetext:soc_sare_data:db au!="0*"
| stats values(bank_name) as bank_name, values(bank_type) as type, values(pwd_expires) as pwd_expires, values(is_interactive) as is_interactive, values(au_owner_name) as au_owner_name, values(au_owner_email) as au_owner_email, values(service_bank_name) as service_bank_name, values(owner_elid) as owner_elid, values(manager_name) as manager_name BY au
| eval bank_name=coalesce(bank_name,service_bank_name)
| eval user=lower(bank_name)
| dedup user
| rex field=user "[^:]+:(?<user>[^\s]+)"
| fields - bank_name
| stats values(au_owner_email) as au_owner_email, values(au_owner_name) as au_owner_name, values(owner_elid) as owner_elid, max(manager_name) as manager_name BY user, service_bank_name, type, pwd_expires, is_interactive
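One low-risk tweak, a sketch only: trim the event pipeline before the first stats so the search process holds less state. Adding | fields right after the base search drops every field you don't aggregate, which is often enough to get under the memory threshold (the rest of the query stays unchanged):

(index=bsa) sourcetype=wf:esetext:user_banks:db OR sourcetype=wf:esetext:soc_sare_data:db au!="0*"
| fields au bank_name bank_type pwd_expires is_interactive au_owner_name au_owner_email service_bank_name owner_elid manager_name
| stats values(bank_name) as bank_name ... BY au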
New to the community, so all help is appreciated!

Requirement
We have a requirement to filter some network data in a correlation search to return any data which has a public IP in the "src" or "dest" field.

Solution
I tried several variants of this:

... | search (src!="10.0.0.0/8" OR src!="192.168.0.0/16" OR src!="172.16.0.0/12") AND (dest!="10.0.0.0/8" OR dest!="192.168.0.0/16" OR dest!="172.16.0.0/12")

I boiled it down to this, which also does not work:

... | search src!="10.0.0.0/8" AND dest!="10.0.0.0/8"

It appears that my query is evaluating the first "OR" individually, meaning that no matter what I set the dest!= filter to, it does not return results.

My Request
Clearly I don't understand the logic being used for the OR/AND operators, and a better understanding of that would be appreciated. Ultimately, though, I'm not stuck on this logic, so if there is a better way to return only results which have a public IP in the src or dest fields, I'm happy to learn that as well! Thanks in advance for the help!
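A sketch of one way to express "public in src OR public in dest" explicitly with cidrmatch, which avoids the truth-table pitfall (a chain of != ORs is almost always true, since an address can't fall inside all three ranges at once); the RFC 1918 list below is the usual assumption about what counts as private:

... | eval src_private=if(cidrmatch("10.0.0.0/8",src) OR cidrmatch("172.16.0.0/12",src) OR cidrmatch("192.168.0.0/16",src), 1, 0)
| eval dest_private=if(cidrmatch("10.0.0.0/8",dest) OR cidrmatch("172.16.0.0/12",dest) OR cidrmatch("192.168.0.0/16",dest), 1, 0)
| where src_private=0 OR dest_private=0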
Hello, I see the following in _raw. However, when I run a search with table or fields, it does not display the text within the inner double quotes even though it is in _raw: "ptp-slave" does not get displayed as part of the value for the field title; it only displays PTP State is ptp-listening, should be.

2022-01-25 21:47:57.047, id="12342ee7c-757e-71ec-1090-777c841f0000", version="15119", title="PTP State is ptp-listening, should be "ptp-slave"", state="OPEN", severity="CRITICAL", priority="MEDIUM", --> there are more fields after this.

How can I get the entire text to be displayed?
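A hedged sketch: because the value itself contains unescaped double quotes, automatic KV extraction stops at the first closing quote. One workaround is a rex that uses the next key as the delimiter; this assumes state= always immediately follows title in these events:

... | rex "title=\"(?<title>.*?)\", state=\""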
I have a scheduled report with 10 appended tstats queries, and the report runs weekly to push data to a KV store. Now I need results from only 7 of the 10 queries, but I don't want to delete the remaining 3. Is there a way I can disable those 3 queries without deleting them?
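A sketch, assuming a reasonably recent Splunk version: SPL supports inline comments wrapped in triple backticks, so you can comment out whole append clauses in place without losing them (the tstats bodies below are placeholders):

| tstats count where index=foo by host
| append [| tstats count where index=bar by host]
``` | append [| tstats count where index=baz by host] -- disabled for now ```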
Hello, I have this ldapsearch that returns tens of thousands of records:

| ldapsearch search=(&(objectClass=User)(!(objectClass=computer)))

I want to filter on the whenCreated attribute to return new users from the past 7 days, as a sliding window. Is it possible to filter by one or more attributes on the ldapsearch command line? I know I can use Splunk evals after the ldapsearch command to do this. Thanks and God bless, Genesius
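A hedged sketch of both options. LDAP filters can compare whenCreated against a generalizedTime literal, but that literal is static, so a true sliding window is easier post-hoc with eval. The strptime format below assumes the attribute comes back as raw generalizedTime like 20220119083000.0Z; SA-ldapsearch may render it differently, so verify against your output.

In-filter (static cutoff):

| ldapsearch search="(&(objectClass=user)(!(objectClass=computer))(whenCreated>=20220119000000.0Z))"

Sliding 7-day window, filtered after the fact:

| ldapsearch search="(&(objectClass=user)(!(objectClass=computer)))"
| eval created=strptime(whenCreated, "%Y%m%d%H%M%S.0Z")
| where created >= relative_time(now(), "-7d@d")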
We use Okta to authenticate and grant access to our Splunk Cloud instance; the groups and roles are already mapped. When a team member leaves the company, we deactivate their Okta account, in theory preventing them from accessing any of our apps. The SSO integration is great at creating SAML users on Splunk Cloud, but getting those accounts removed on Splunk Cloud usually requires a Splunk Support ticket and a 3-5 day turnaround. I can't use the Splunk REST API to do it because we're on cloud. Does anyone know anything about deprovisioning automagically, or about getting Splunk Cloud to start working on this? Fezzes, Swarm!
I'm trying to set up this image: mltk-container-golden-image-cpu:3.8.0. I'm able to access localhost, but I'm unable to get past the authentication page for Jupyter; it keeps giving me invalid credentials. I changed the password using:

jupyter notebook password

It still gives me invalid credentials.
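A hedged troubleshooting sketch, assuming the image is running locally under Docker: jupyter notebook password only takes effect for a Jupyter server started after it runs, so set it inside the running container and then restart the container (the container name is a placeholder; use docker ps to find yours):

# find the running container
docker ps

# set the password inside the container, then restart it
docker exec -it <container_name> jupyter notebook password
docker restart <container_name>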
Need a better option to get the user id from the first search to populate results using the subsearch. I thought join would work, but it's not... suggestions?

index="something"
| rex field=userName "\'(?P<userName>.+)\'"
| rex "\/ROOT\/[^\s']+\/(?P<Environment>[^\/]+)\/[^\s\/']+(\s|')"
| search NOT (Environment=eu-central-1 OR Environment=BOLCOM)
| rename commandTime as "Date/Time"
| join userName
    [ search index=okta sourcetype="OktaIM2:user" AND profile.department=*
    | rename profile.employeeNumber as userName, profile.division as Department, profile.title as Title, profile.countryCode AS Country
    | table userName, Department, Title, Country]
| dedup Department, Title, Environment
| table "Date/Time", Department, Title, Environment, Country
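A hedged join-free sketch using the usual OR-plus-stats pattern, under the assumption that profile.employeeNumber in the Okta events matches the userName extracted in the first search (field names are taken from your query; the rex lines may need to stay source-specific):

(index="something") OR (index=okta sourcetype="OktaIM2:user" profile.department=*)
| rex field=userName "\'(?P<userName>.+)\'"
| rex "\/ROOT\/[^\s']+\/(?P<Environment>[^\/]+)\/[^\s\/']+(\s|')"
| eval userName=coalesce(userName, 'profile.employeeNumber')
| stats values(commandTime) as "Date/Time" values(profile.division) as Department values(profile.title) as Title values(profile.countryCode) as Country values(Environment) as Environment by userName
| search NOT (Environment=eu-central-1 OR Environment=BOLCOM)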
I was trying to join multiple lines generated in /var/log/secure. I tried transaction, but it doesn't seem to work in this case. Below is an example from the secure file. I want to combine all these lines based on the common text "sshd[288792]". I can't search on the same keyword, because the id in sshd[288792] will be different for each session. Your help on this would be really appreciated.

Jan 25 18:34:06 SERVER1 sshd[288792]: Connection from xxx.xxxx.xxx.xxx port xxxx on xxx.xxx.xxx.xxx port xx
Jan 25 18:34:10 SERVER1 sshd[288792]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=
Jan 25 18:34:10 SERVER1 sshd[288792]: pam_sss(sshd:auth): User info message: Your password will expire
Jan 25 18:34:10 SERVER1 sshd[288792]: pam_sss(sshd:auth):  success; logname= uid=0 euid=0
Jan 25 18:34:10 SERVER1 sshd[288792]: Accepted  for xxxx from xxx.xxx.xxx.xxx port xxxxx xxx
Jan 25 18:34:10 SERVER1 sshd[288792]: pam_unix(sshd:session): session opened for user xxxxx by (uid=0)
Jan 25 18:34:10 SERVER1 sshd[288792]: User child is on pid 289788
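A minimal sketch: extract the sshd pid into its own field first, then group on it, since transaction correlates on fields rather than raw text (the index/source terms are placeholders, and maxpause guards against pid reuse):

index=os source=/var/log/secure sshd
| rex "sshd\[(?<sshd_pid>\d+)\]"
| transaction host sshd_pid maxpause=10m

Or, if you only need the lines grouped for display:

| stats min(_time) as _time list(_raw) as session_events by host sshd_pid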