All Topics

In our env, we've had a high value for remote.s3.multipart_upload.part_size to fix a bug present in versions prior to 8.1. When I recently reverted the setting to its default, these logs started popping up from time to time:

WARN S3Client [34628 cachemanagerUploadExecutorWorker-0] - command=multipart-upload command=put transactionId=.... rTxnId=... status=completed success=N uri=... statusCode=500 statusDescription="Internal Server Error" payload="<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>InternalError</Code><Message>We encountered an internal error. Please try again.</Message><RequestId>...</RequestId><HostId>...</HostId></Error>"

ERROR S3Client [103868 cachemanagerUploadExecutorWorker-0] - multipart-upload command=end transactionId=... rTxnId=... parts=5 uploadId="..." status=completed success=N payload="<Error><Code>InternalError</Code><Message>We encountered an internal error. Please try again.</Message><RequestId>...</RequestId><HostId>...</HostId></Error>"

I checked, and the uploaded file is the same size as the local one. I was also able to confirm the remote and local files have the same CRC, so it appears the upload worked. The file modtime in the S3 console is three seconds before the failed-upload error message is recorded. My guess is there was a temporary issue and the retry succeeded. I can't find any follow-up logs about the bucket id after the error message. Does anyone else observe this? Should I just accept these error messages, or is there something I can do about them?
Hi, all! I came across an issue with extracting fields from syslog. Here are some samples of the "Call_Session_ID" value I want to extract:

JKYFnxBdcIIiBImIMsoJm67
tMKtr5WNYa2e9PqC1cBhswf
YoKwDKa_K9m4SS1qzbecNbl
hGpydwuxLF_iYw5AE0pe81g
F440sxU_Ntqg2zswAXgt-lW

Here's the regular expression generated by Splunk:

^[^\|\n]*\|(?P<Call_Session_ID>\w+)

Here are some sample events:

2022-01-25 12:08:04,925|F440sxU_Ntqg2zswAXgt-lW|INFO|com.hsbc.amh.civr.fallout.node.AmhCivrGenesysXferNode|execute()|***End Call***
2022-01-25 12:11:49,229|pbDdnF8QT6Bku0odJ4SL_Q8|INFO|com.hsbc.amh.civr.endcall.node.AmhCivrExitNode|execute()|***End Call***
2022-01-25 12:27:03,958|42dHIbXvXBKqG20u_m3kU5R|INFO|com.ibm._jsp._xfer_5F_genesys:svf.nodename|_jspService()|Contact Data Sent: UD_DIALLED_SERVICE:OneNumber_Jade~UD_IVR_STARTCALL_REF:0~UD_IS_HANGUP:N~UD_CUSTOMER_TYPE:Jade~UD_FALLOUT_SECTION:BankPaymentTransfer~UD_PROPOSITION:Jade~UD_SUBPROPOSITION:GeneralBanking~UD_LANGUAGE:Cantonese~UD_FALLOUT_QUEUE:Default~UD_COUNTRY_CODE:HKCC~UD_FALLOUT_REASON:Agent

Some facts about the log files: Call_Session_ID is followed by a | every time. But there are some errors when the results come out. Firstly, there are some null values in the result. Secondly, the result sometimes only shows part of the value. When checking back against the event, it turns out that this Call_Session_ID contains a hyphen:

2022-01-25 11:59:18,032|Yih9YAueLZSJ-va5ZAVllOc|INFO|com.hsbc.amh.civr.endcall.node.AmhCivrExitNode|execute()|***End Call***

How can I solve this problem?
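A minimal sketch of a rex that also matches hyphenated IDs, assuming Call_Session_ID is always the text between the first and second | delimiters (as in the sample events above); \w+ stops at the hyphen, while a negated character class does not:

| rex field=_raw "^[^\|\n]*\|(?P<Call_Session_ID>[^\|]+)\|"

Events where the capture comes back empty would then point to lines that genuinely lack the second pipe-delimited field, which may explain the null values.
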
I am trying to uninstall the Splunk agent on a server and I get this message. I also tried doing it from the command line and it shows a similar error, but with the same message. What can I do to resolve this problem?
My query, after finalizing for some time, gives me:

The search process with sid= was forcefully terminated because its physical memory usage has exceeded the 'search_process_memory_usage_threshold' setting in limits.conf.

I am not allowed to increase memory. Any suggestions on how to tweak the query to avoid forceful termination?

=================

(index=bsa) sourcetype=wf:esetext:user_banks:db OR sourcetype=wf:esetext:soc_sare_data:db au!="0*"
| stats values(bank_name) as bank_name, values(bank_type) as type, values(pwd_expires) as pwd_expires, values(is_interactive) as is_interactive, values(au_owner_name) as au_owner_name, values(au_owner_email) as au_owner_email, values(service_bank_name) as service_bank_name, values(owner_elid) as owner_elid, values(manager_name) as manager_name BY au
| eval bank_name=coalesce(bank_name,service_bank_name)
| eval user=lower(bank_name)
| dedup user
| rex field=user "[^:]+:(?<user>[^\s]+)"
| fields - bank_name
| stats values(au_owner_email) as au_owner_email, values(au_owner_name) as au_owner_name, values(owner_elid) as owner_elid, max(manager_name) as manager_name BY user, service_bank_name, type, pwd_expires, is_interactive
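Not an authoritative fix, but one common way to reduce memory pressure in stats-heavy searches is to drop every field you don't need before the first stats. A rough sketch, keeping the same field names as above:

(index=bsa) (sourcetype=wf:esetext:user_banks:db OR sourcetype=wf:esetext:soc_sare_data:db) au!="0*"
| fields au bank_name bank_type pwd_expires is_interactive au_owner_name au_owner_email service_bank_name owner_elid manager_name
| stats values(bank_name) as bank_name values(bank_type) as type values(pwd_expires) as pwd_expires values(is_interactive) as is_interactive values(au_owner_name) as au_owner_name values(au_owner_email) as au_owner_email values(service_bank_name) as service_bank_name values(owner_elid) as owner_elid values(manager_name) as manager_name BY au

The rest of the pipeline can stay as is. The parentheses around the sourcetype OR also keep the au!="0*" filter applied to both sourcetypes, since implicit AND binds tighter than OR in the base search.
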
New to the community, so all help is appreciated!

Requirement
We have a requirement to filter some network data in a correlation search to return any data that has a public IP in the "src" or "dest" field.

Solution
I tried several variants of this:

... | search (src!="10.0.0.0/8" OR src!="192.168.0.0/16" OR src!="172.16.0.0/12") AND (dest!="10.0.0.0/8" OR dest!="192.168.0.0/16" OR dest!="172.16.0.0/12")

I boiled it down to this, which also does not work:

... | search src!="10.0.0.0/8" AND dest!="10.0.0.0/8"

It appears that my query is evaluating the first "OR" individually, meaning that no matter what I set the dest!= filter to, it does not return results.

My Request
Clearly I don't understand the logic being used for the OR/AND operators, and a better understanding of that would be appreciated. Ultimately, though, I'm not stuck on this logic, so if there is a better way to only return results that have a public IP in the src OR dest fields, I'm happy to learn the best way to do that as well! Thanks in advance for the help!
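A possible approach to sketch (not tested against this data): the != comparisons above do string matching rather than subnet matching, so cidrmatch() is usually a better fit. Assuming src and dest each hold a single IPv4 address:

... | eval src_private=if(cidrmatch("10.0.0.0/8",src) OR cidrmatch("172.16.0.0/12",src) OR cidrmatch("192.168.0.0/16",src), 1, 0)
    | eval dest_private=if(cidrmatch("10.0.0.0/8",dest) OR cidrmatch("172.16.0.0/12",dest) OR cidrmatch("192.168.0.0/16",dest), 1, 0)
    | where src_private=0 OR dest_private=0

The where clause keeps any event where at least one side falls outside the RFC 1918 ranges, which matches "public IP in src OR dest".
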
Hello, I see the following in _raw. However, when I run a search with table or fields, it does not display the text within the double quotes despite it being in _raw. "ptp-slave" does not get displayed as part of the value for the field title; it only displays 'PTP State is ptp-listening, should be '.

2022-01-25 21:47:57.047, id="12342ee7c-757e-71ec-1090-777c841f0000", version="15119", title="PTP State is ptp-listening, should be "ptp-slave"", state="OPEN", severity="CRITICAL", priority="MEDIUM", --> there are more fields after this.

How can I get the entire text to be displayed?
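Automatic key=value extraction stops at the first unescaped double quote, which is why the nested "ptp-slave" gets cut off. A minimal sketch of a rex override, assuming state= always follows title in these events (verify that against your data):

... | rex field=_raw "title=\"(?<title>.*?)\",\s+state="

The lazy .*? capture with the ", state=" anchor lets title include the embedded quotes.
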
I have a scheduled report with 10 appended tstats queries, and the report runs weekly to push data to a KV store. Out of the 10 queries I now need results from only 7, but I don't want to delete the other 3. Is there a way to disable the remaining 3 queries without deleting them?
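One possibility, assuming you are on a Splunk version that supports SPL comments (triple backticks): wrap the append blocks you want to skip in comment markers, so they stay in the saved search but are ignored at parse time. A sketch with placeholder tstats searches, not your real ones:

| tstats count where index=web by host
```| append [| tstats count where index=dns by host]```
| append [| tstats count where index=proxy by host]

Everything between the backtick markers is stripped before the search runs, so the remaining pipeline still parses normally.
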
Hello, I have this ldapsearch that returns tens of thousands of records.

| ldapsearch search=(&(objectClass=User)(!(objectClass=computer)))

I want to filter on the whenCreated attribute to return new users in the past 7 days (sliding window). Is it possible to perform filtering by one or more attributes on the ldapsearch command line? I know I can use Splunk evals after the ldapsearch command to do this. Thanks and God bless, Genesius
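For what it's worth, a post-filter sketch using eval/where; the strptime format string is an assumption about how whenCreated is rendered in your results, so adjust it to match what you actually see:

| ldapsearch search=(&(objectClass=User)(!(objectClass=computer)))
| eval created_epoch=strptime(whenCreated, "%Y-%m-%d %H:%M:%S%z")
| where created_epoch >= relative_time(now(), "-7d@d")

Pushing the filter into the LDAP query itself would need a literal generalized-time value (e.g. whenCreated>=20220118000000.0Z), which is harder to keep as a sliding window.
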
We use Okta to authenticate and grant access to our Splunk Cloud instance. The groups and roles are already mapped. When a team member leaves the company, we deactivate their Okta account, in theory preventing them from accessing any of our apps. The SSO integration is great at creating SAML users on Splunk Cloud, but getting those accounts removed from Splunk Cloud usually requires a Splunk Support ticket and a 3-5 day turnaround. I can't use the Splunk REST API to do it because we're on cloud. Does anyone know anything about deprovisioning automagically, or about getting Splunk Cloud to start working on this? Fezzes, Swarm!
I'm trying to set up this image: mltk-container-golden-image-cpu:3.8.0. I'm able to access localhost but unable to get past the authentication page for Jupyter; it keeps giving me invalid credentials. I changed the password using:

jupyter notebook password

It still gives me invalid credentials.
I need a better option to get the user id from the first search and populate results using the subsearch. I thought join would work, but it's not. Suggestions?

index="something"
| rex field=userName "\'(?P<userName>.+)\'"
| rex "\/ROOT\/[^\s']+\/(?P<Environment>[^\/]+)\/[^\s\/']+(\s|')"
| search NOT (Environment=eu-central-1 OR Environment=BOLCOM)
| rename commandTime as "Date/Time"
| join userName
    [ search index=okta sourcetype="OktaIM2:user" AND profile.department=*
    | rename profile.employeeNumber as userName, profile.division as Department, profile.title as Title, profile.countryCode AS Country
    | table userName, Department, Title, Country]
| dedup Department, Title, Environment
| table "Date/Time", Department, Title, Environment, Country
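One alternative worth sketching (not a drop-in; the field names are taken from the search above and assumed to exist): run both searches together and merge with stats instead of join, which avoids the subsearch limits that often make join appear not to work:

(index="something") OR (index=okta sourcetype="OktaIM2:user" profile.department=*)
| rex field=userName "\'(?P<userName>.+)\'"
| eval userName=coalesce(userName, 'profile.employeeNumber')
| eval Department=coalesce(Department, 'profile.division'), Title=coalesce(Title, 'profile.title'), Country=coalesce(Country, 'profile.countryCode')
| stats values(commandTime) as "Date/Time" values(Department) as Department values(Title) as Title values(Environment) as Environment values(Country) as Country by userName

The Environment rex and the NOT filter from the original search would still be applied to the first index's events before the stats; this sketch just shows the merge pattern.
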
I was trying to join multiple lines generated in /var/log/secure. I tried transaction, but it looks like that doesn't work in this case. Below is an example from the secure file. I want to combine all these lines based on a common text, "sshd[288792]". Your help on this would be really appreciated. I cannot search for the same keyword, as the id in sshd ("288792") will be different for each session.

Jan 25 18:34:06 SERVER1 sshd[288792]: Connection from xxx.xxxx.xxx.xxx port xxxx on xxx.xxx.xxx.xxx port xx
Jan 25 18:34:10 SERVER1 sshd[288792]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=
Jan 25 18:34:10 SERVER1 sshd[288792]: pam_sss(sshd:auth): User info message: Your password will expire
Jan 25 18:34:10 SERVER1 sshd[288792]: pam_sss(sshd:auth):  success; logname= uid=0 euid=0
Jan 25 18:34:10 SERVER1 sshd[288792]: Accepted  for xxxx from xxx.xxx.xxx.xxx port xxxxx xxx
Jan 25 18:34:10 SERVER1 sshd[288792]: pam_unix(sshd:session): session opened for user xxxxx by (uid=0)
Jan 25 18:34:10 SERVER1 sshd[288792]: User child is on pid 289788
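A sketch of grouping on the extracted pid rather than on a literal keyword; the index, sourcetype, and maxspan here are assumptions to adapt to your environment:

index=os sourcetype=linux_secure
| rex "sshd\[(?<sshd_pid>\d+)\]"
| transaction host sshd_pid maxspan=10m

If transaction is too expensive or too loose, a stats-based grouping achieves something similar:

index=os sourcetype=linux_secure
| rex "sshd\[(?<sshd_pid>\d+)\]"
| stats list(_raw) as events min(_time) as session_start max(_time) as session_end by host sshd_pid
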
I have a dashboard that shows a bunch of different metrics for some data that I have. One of the metrics compares today's counts vs an average of the last four weeks on the same weekday and up to the same time of day. So in essence my data looks kind of like this:

type   avg_past   today
----   --------   -----
foo    10456      10550
bar    6          9
baz    20         30
etc...

I've got this charting to a bar graph where I can see, for each type, the past average vs today. What I would like to do is only show the ones where there is a statistically significant difference between the past and today. I could throw something like this onto my search:

| where today > (avg_past * 1.25)

And that will work OK for types that have lots of data, but in my example above "bar" has 50% more data, so it would also show up even though it's really not statistically significant. So my predicament is that I need the required percentage difference to be larger when the counts are small and smaller as the counts go up. Thoughts on how to achieve this? Thanks.
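One common heuristic, sketched under the assumption that these are event counts that behave roughly like a Poisson process: treat sqrt(avg_past) as the expected noise and require today to exceed the average by several of those "sigmas". Small counts then need a proportionally bigger jump to qualify:

| eval sigma=sqrt(avg_past)
| eval zscore=(today - avg_past) / sigma
| where zscore > 3

With the example numbers, foo (avg 10456) would need roughly 10456 + 3*102 ≈ 10763 to show, and bar (avg 6) would need about 13, so neither of those small or marginal changes survives the filter. You may want to guard against avg_past=0 before dividing.
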
Does the Splunk Enterprise 8.2.4 60-day Eval have the same limitation with the Zscaler app and Zscaler add-on as Splunk Cloud? Thanks, Asif
Hello, I use an input time token called "timepicker":

<earliest>$timepicker.earliest$</earliest>
<latest>$timepicker.latest$</latest>

Is there a way to call this input time token directly in my search? Something like this:

index=toto sourcetype=tutu earliest=$timepicker$ latest=$timepicker$

Thanks
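For reference, a sketch of how this usually looks in SimpleXML: the time input exposes .earliest and .latest sub-tokens, so they can be placed directly in the query string (index and sourcetype are taken from the example above):

<query>index=toto sourcetype=tutu earliest=$timepicker.earliest$ latest=$timepicker.latest$</query>
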
Hello friends. We are in the process of moving the collection of o365 events, which we currently do on an on-prem HF via "Splunk_TA_microsoft-cloudservices", to the Splunk Cloud IDM using "splunk_ta_o365". Using the same Client ID, Client Secret, and Tenant ID, we seem to be getting the same workloads:

Aip, AzureActiveDirectory, CRM, Exchange, MicrosoftForms, MicrosoftStream, MicrosoftTeams, OneDrive, PowerApps, PowerBI, PublicEndpoint, SecurityComplianceCenter, SharePoint, SkypeForBusiness, Yammer

But if we compare the number of events, we seem to get a lower amount of data using splunk_ta_o365 in Splunk Cloud than with Splunk_TA_microsoft-cloudservices on-prem. What seems to be the problem?
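To narrow down where the gap is, a comparison sketch by hour and sourcetype may help; the index names here are placeholders, so substitute whatever each add-on actually writes to:

| tstats count where (index=o365_old OR index=o365_new) by index sourcetype _time span=1h
| timechart span=1h sum(count) by index

If the gap is concentrated in one sourcetype or one time window, that usually points at a specific input or subscription rather than a general configuration problem.
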
I found that the format of a sourcetype changed some time ago. Now I need to extract the data correctly for both cases.

2022-01-11 17:40:59.000, SEVERITY="123", DESCRIPTION="ooops"
2018-01-24 16:35:05 SEVERITY="112", DESCRIPTION="blabla"

Extraction for the first type of entry works with this regex that was built with Splunk field extraction:

^(?P<dt>[^,]+)[^"\n]*"(?P<SEVERITY>\d+)[^=\n]*="(?P<DESCRIPTION>[^"]+)

How can the regex be expanded to split either at "," or at the second space if the comma is missing? One idea is to always capture up to the second space and remove the comma, or to split before SEVERITY and remove the comma, but I didn't get either working. You can find the regex at https://regex101.com/r/mxdAyx/1

Thanks
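A sketch that anchors on the shape of the timestamp instead of on the delimiter, so the optional comma and optional milliseconds no longer matter (this assumes the timestamp always looks like the two samples above):

^(?P<dt>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}(?:\.\d+)?),?\s+SEVERITY="(?P<SEVERITY>\d+)",\s+DESCRIPTION="(?P<DESCRIPTION>[^"]+)"

The ,? makes the comma after the timestamp optional, and (?:\.\d+)? makes the milliseconds optional, so both formats match with one expression.
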
Hello, I would like to assign random new "unassigned" notables to a specific user. I wanted to accomplish this via a saved search, but unfortunately it did not work, even though the user I am trying to assign to does actually exist in the environment when looking at the es_notable_events lookup, which also has previous actions made on notables.

| inputlookup es_notable_events
| search owner="unassigned"
| head 10
| eval owner="usertoassign"
| outputlookup es_notable_events append=true key_field=owner

Is there another way to do this? What am I doing wrong? Thanks, Regards,
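Not a definitive answer, but one path to compare against: Enterprise Security exposes a notable_update REST endpoint for changing owner/status rather than writing back to a lookup. A rough sketch; the endpoint name, parameters, credentials, and the event_id value are assumptions to verify against your ES version's documentation:

curl -k -u admin:changeme https://localhost:8089/services/notable_update \
    -d "ruleUIDs=<event_id of the notable>" \
    -d "newOwner=usertoassign" \
    -d "comment=Auto-assigned by saved search"

The event_id values for the unassigned notables would come from a search of the notable index, and the call could be scripted per batch of ten.
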
Hi, I need a Splunk search query to get the last two months of data, but only every Friday's data in a 15-minute range (i.e. 08:00 AM to 08:15 AM every Friday).

Example:

Date          fieldA
21/01/2022    value1
14/01/2022    value2
07/01/2022    value3

Can anyone please suggest how I can achieve this?
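A sketch of one way to do it; the index, sourcetype, and fieldA names are placeholders: search the whole two months, then keep only events whose weekday and time of day fall inside the window.

index=your_index sourcetype=your_sourcetype earliest=-60d@d latest=now
| eval wday=strftime(_time, "%A"), hhmm=strftime(_time, "%H%M")
| where wday="Friday" AND hhmm>="0800" AND hhmm<"0815"
| eval Date=strftime(_time, "%d/%m/%Y")
| table Date fieldA
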
I am in the middle of configuring a standalone Splunk installation. I am getting confused about the different attributes that can be set for overall storage and per index. It is a very small installation with only about 30 assets connected and about 2.3TB of storage to store data for a year. I have the following configuration so far:

frozenTimePeriodInSecs = 31536000 # 365 days

[volume:hotwarm]
path = <directory to hotwarm location>
maxVolumeDataSizeMB = 178176 # 174GB

[volume:cold]
path = <directory to cold location>
maxVolumeDataSizeMB = 1970176 # 1924GB

[network]
homePath = volume:hotwarm/network/db
coldPath = volume:cold/network/colddb
thawedPath = $SPLUNK_DB/network/thaweddb

[windows]
homePath = volume:hotwarm/windows/db
coldPath = volume:cold/windows/colddb
thawedPath = $SPLUNK_DB/windows/thaweddb

I'm not sure how to use maxTotalDataSizeMB together with maxVolumeDataSizeMB to keep maxTotalDataSizeMB from triggering a roll to frozen before the 365 days is up. We currently do not have any idea how much data will be coming in. Is it good practice to set maxTotalDataSizeMB for each index to the same size as maxVolumeDataSizeMB? I have seen this practice before... And if so, is it the maxVolumeDataSizeMB of the cold storage, or of hot/warm/cold storage combined?
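Not a recommendation, just a sketch of how the per-index cap is often expressed relative to the volumes: maxTotalDataSizeMB counts an index's hot, warm, and cold buckets combined, so it is usually sized against the total space that index is expected to occupy across both volumes, not against one volume alone. Assuming the two indexes share the storage roughly equally (a made-up split for illustration):

[network]
homePath = volume:hotwarm/network/db
coldPath = volume:cold/network/colddb
thawedPath = $SPLUNK_DB/network/thaweddb
# hypothetical cap: about half of (178176 MB hot/warm + 1970176 MB cold);
# adjust once the real ingest volume per index is known
maxTotalDataSizeMB = 1074176
frozenTimePeriodInSecs = 31536000

With the caps sized this way, the volume limits handle where the data lives, and frozenTimePeriodInSecs, rather than the size cap, is what normally triggers the roll to frozen.
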