All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I reset the data on a host in TrackMe as we were having issues with alerts. The host is no longer showing in TrackMe (monitored_state=ALL), but I can see it under the Data Host Monitoring Collection. In the collection output, "data_host_st_summary", "data_index" and "data_sourcetype" are all blank fields. How do I go about populating these fields with the correct data again? This is a quiet host with few events; it is not being detected when I run the short/long term tracker reports, and I cannot find a way to manually add the missing data. Thanks.
base search
| fields _time host pdfpath status
| stats values(pdfpath) as pdfpath values(host) as host by _time status
| table _time host status pdfpath

Example log:

_time host status pdfpath
2021-09-08 08:00:00.359 hostA processing /20210907/xxxx_live.3.21.cv.1866.13428730.1
2021-09-08 08:00:00.458 hostB processing /20180821/xxxx_live.1.18.cr.403.19409265.0
2021-09-08 08:00:00.462 hostB processing /20180821/xxxx_live.1.18.cr.403.19409265.0
2021-09-08 08:00:00.473 hostA finished /20210907/xxxx_live.3.21.cv.1866.13428730.1
2021-09-08 08:00:00.477 hostC processing /tmp/HL_end_state379145533207037128.pdf
2021-09-08 08:00:00.500 hostC finished /tmp/HL_end_state379145533207037128.pdf

I am looking for a way to trigger an alert when a host does not finish processing a pdfpath. Using the example above, hostB is having trouble processing its pdfpath, as there is no correlating "finished" status as there is for hostA and hostC. The output could be just the earliest and latest times the file was processed and a count of the attempts to process it. It would also be nice to have a new status called "failed" to easily count the number of failures.

Output example:

earliest_time latest_time host number_of_attempts pdfpath status
2021-09-08 08:00:00.458 2021-09-08 08:00:00.462 hostB 2 /20180821/xxxx_live.1.18.cr.403.19409265.0 failed

I'm looking for suggestions on how I could do this. Thank you.
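A sketch of one possible approach (untested; "base search" and the field names are taken from the post above): group by pdfpath and keep only files that never reach a "finished" status.

```
base search
| fields _time host pdfpath status
| stats earliest(_time) as earliest_time latest(_time) as latest_time
        count(eval(status="processing")) as number_of_attempts
        values(host) as host
        values(status) as statuses
    by pdfpath
| where isnull(mvfind(statuses, "finished"))
| eval status="failed"
| table earliest_time latest_time host number_of_attempts pdfpath status
```

mvfind returns null when no value of the multivalue field matches the regex, so the where clause keeps only files with no "finished" event. The earliest_time/latest_time values come out as epoch seconds and can be rendered with fieldformat or strftime. Saved as an alert, this could trigger whenever it returns rows.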
Hi, below is a simple example of what I am trying to do. I am trying to remove the duplicate out of the process name. I have the code for that, but I only want to run it when service_type = agent-based. Ideally: if service_type = agent-based, then run the evals below. However, I then lose the events where service_type != agent-based, and I don't want to run the evals on those. So how do I say: if agent-based, run these 2 evals on that specific data, and keep the rest of the != agent-based events as they are?

| eval temp=split($Process_Name$," ")
| eval Process_Name=mvindex(temp,0)

Thanks in advance.
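One way to apply the evals conditionally while leaving the non-agent-based rows untouched is to wrap both evals in if() (a sketch, untested):

```
| eval temp=if(service_type="agent-based", split(Process_Name," "), null())
| eval Process_Name=if(service_type="agent-based", mvindex(temp,0), Process_Name)
| fields - temp
```

Events where service_type != agent-based pass through with Process_Name unchanged.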
Hello, I'm trying to add the appearances of a certain value to my base search count. The value is "detatched"; it is written in an event when a certain license has been used. This detatched license has a lifespan of 14 days; afterwards it's not active anymore and should no longer be added to my base search. So basically it's like this:

index=indexa licensecount=* productid=5000 earliest=-30d@d latest=now()
| eval flag="basecount"
| append [search index=indexa productid=5000 subject="*detatched*" earliest=-45d@d latest=-31d@d
  | eval flag="addcount"]
| stats count(eval(flag="basecount")) as basecount count(eval(flag="addcount")) as addcount
| eval totalcount = basecount+addcount
| timechart span=1d count(totalcount)

I know this query is partially flawed, but it shows what I'm trying to accomplish. Example: today I have a licence count of 5 for product 5000; 14 days ago I had a count of 1, so today it should show me 6. Tomorrow, that count of 1 shouldn't be added anymore, because it's more than 14 days old and no longer active. Ideally this would be shown in a timechart. Hope someone can make sense of this. I much appreciate any help or feedback, because maybe it's not possible to do this in Splunk. Thanks a lot, guys.
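A possible restructuring (untested sketch; it assumes the detatched events are identifiable by the subject field, as in the post above): count base and detatched events per day in one timechart, then use streamstats with a 14-day window so each detatched count only contributes for 14 days.

```
index=indexa productid=5000 (licensecount=* OR subject="*detatched*") earliest=-45d@d
| eval kind=if(like(subject, "%detatched%"), "detached", "base")
| timechart span=1d count(eval(kind="base")) as basecount count(eval(kind="detached")) as detachedcount
| streamstats time_window=14d sum(detachedcount) as active_detached
| eval totalcount = basecount + active_detached
| table _time totalcount
```

timechart emits one time-sorted row per day, which is what the time_window option of streamstats needs to compute a sliding 14-day sum.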
If I have a correlationId, how do I write a query to find out how many times a particular client/method/API is being called?
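Assuming correlationId, client, method, and api are already extracted fields (hypothetical names, adjust to your data), a simple per-caller count could look like this untested sketch:

```
index=your_index correlationId=*
| stats count by client method api
```

stats count by produces one row per client/method/api combination; swapping count for dc(correlationId) would count distinct correlation IDs instead of raw events.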
Good morning everyone,

I am trying to ingest a log that does not roll over on its own; a new file is only created when the service that writes the log is restarted. We have done some testing using crcSalt, and so far that has not helped Splunk to continually monitor the file as it is written. Any advice would be appreciated.

inputs.conf

[monitor://E:\Tomcat 9.0\logs\tomcat9-stdout.*.log]
sourcetype = test
index = test
blacklist = \.(gz|bz2|z|zip)$
disabled = false
crcSalt = <SOURCE>

props.conf

[test]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
pulldown_type = true
CHECK_FOR_HEADER = false
CHARSET = AUTO
EXTRACT-SessionID = (?<=SessionID:)(?P<SessionID>.+)
EXTRACT-Result = (?<=VerificationResult:)(?P<Result>.+)
EXTRACT-UserName = (?<=User:)(?P<UserName>.+)
EXTRACT-Response = (?<=Account Response:)(?P<Response>.+)
EXTRACT-Second_Response = (?<=Verification_test:)(?P<Second_Response>.+)
Hi. I have a data model that consists of two root event datasets, both accelerated, using simple SPL. The first dataset I can access using the following:

| tstats summariesonly=t count FROM datamodel=model_name where nodename=dataset_1 by dataset_1.FieldName

But for the 2nd root event dataset, the same format doesn't work. For that one, I get events only by referencing the dataset along with the datamodel:

| tstats summariesonly=t count FROM datamodel=model_name.dataset_2 by dataset_2.FieldName

E.g., the following will not work:

| tstats summariesonly=t count FROM datamodel=model_name where nodename=dataset_2 by dataset_2.FieldName

I am trying to understand what causes Splunk search to behave differently on these datasets when both are at the same level. Thanks, ~ Abhi
Our NetApp storage is currently configured with Splunk; now we also want to configure it from Splunk to the Checkmk monitoring tool. Will that be possible?
Hey splunkers, how do I create a new field in Splunk? I have a Windows security log with a "User" field, and I want to call it and use it as "Account". I tried with eval but didn't succeed. Thanks.
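Two common ways to do this at search time (a sketch; the "User" and "Account" names are from the post above). To rename the field:

```
... | rename User as Account
```

Or, to keep both fields, copy it instead:

```
... | eval Account=User
```

To make the new name permanent for a sourcetype, a FIELDALIAS in props.conf is the usual knowledge-object equivalent.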
Hello, we have a problem with useACK; there are known bugs in our UF 7.3.4 (https://docs.splunk.com/Documentation/Splunk/7.3.4/ReleaseNotes/KnownIssues): SPL-171178, SPL-167307, SPL-202078. Disabling indexer acknowledgement on the forwarder is not acceptable in our case, so which solution is best? Scheduling regular restarts? Downgrading? Thanks for your contributions.
I have a problem similar to this: Scripted Input timeout. In a modular input, I use a Python script to collect data, and in most cases a single collection takes 10 minutes, but my interval is set to 5 minutes. Will Splunk run the first collection and wait until it finishes before starting the next one? And will the missed schedules simply be ignored?
Hi all, I just noted that the macro `cim_Authentication_indexes` of Splunk_SA_CIM has a definition like the following:

[cim_Authentication_indexes]
definition = ()

What does it mean? Sorry for the newbie question. Thanks a lot. Regards
Hello, whenever I try to create a notable event via "Configure -> Incident Management -> New Notable Event", the website seems to crash, giving a weird error. I wanted to create a notable event so that my Incident Review is not blank.
Hi, I'm using version 4.1.5 of the Cloud Services Add-on on my HF, Splunk version 8.0.9. I've configured an Azure App Account in the app and an input for collecting Azure DevOps audit data, but I'm not getting any logs into Splunk. I'm getting the warning message below in "splunk_ta_microsoft_cloudservices_mscs_azure_event_hub_AzureDevopsAudit.log":

2021-09-09 08:22:45,926 level=WARNING pid=84608 tid=Thread-2 logger=uamqp.authentication.cbs_auth pos=cbs_auth.py:handle_token:122 | Authentication Put-Token failed. Retries exhausted.

CPU rises to 90% when the input is enabled. Any ideas?

Regards, Martin
Hey splunkers, how can I correlate rules in Splunk from 2 data sources? The events, for example:

OKTA - privilege granted:
index="network" sourcetype="OktaIM2:log" eventType="user.account.privilege.grant"

Windows - event auditing disabled:
index="WinEventLog" sourcetype="WinEventLog" EventCode="4719" AuditPolicyChanges="removed"

I want to correlate the Okta event first and then the Windows event with the same field (for example Username) within 10 minutes.
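One untested sketch: search both sources together, normalize the username into one field (okta_actor and win_user below are placeholder field names, adjust to your data), then keep users that have both event types within 10 minutes:

```
(index="network" sourcetype="OktaIM2:log" eventType="user.account.privilege.grant")
OR (index="WinEventLog" sourcetype="WinEventLog" EventCode="4719" AuditPolicyChanges="removed")
| eval Username=coalesce(okta_actor, win_user)
| stats earliest(_time) as first_seen latest(_time) as last_seen dc(sourcetype) as source_count by Username
| where source_count=2 AND last_seen - first_seen <= 600
```

This version ignores ordering; if the Okta event must strictly come first, transaction Username maxspan=10m with startswith/endswith conditions is an alternative, at a higher cost.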
I am running Splunk version 8.0.10 with Lookup Editor version 3.4.6. I've noted a problem in my CSVs: if I scroll using my mouse to the bottom of a long list (say 300 lines), the scro... I have seen there have been other bugs with 3.4.6, and the only version that is "supported" under 8.0 is 3.4.6; 3.5.0, which is available for download, only supports 8.1+.
Hi. I have a txt file that includes many strings, and many logs from my web server that are indexed. I want to find the logs that match at least one of the strings in the txt file. How can I search and query for this goal? Thanks.

For example, the txt file contains:
mosConfig.absolute.path

and the logs contain:
http://localhost/index.php?option=com_sef&Itemid=&mosConfig.absolute.path=[shell.txt?]

and the output should be:
http://localhost/index.php?option=com_sef&Itemid=&mosConfig.absolute.path=[shell.txt?]
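One common pattern (untested sketch): upload the txt file as a lookup, say patterns.csv with a single column named pattern (both names hypothetical), and use it as a subsearch that expands into wildcard terms:

```
index=web_index
    [| inputlookup patterns.csv
     | eval search="*" . pattern . "*"
     | fields search]
```

When a subsearch returns a field literally named search, its values are substituted into the outer search verbatim and ORed together, so any log containing at least one of the strings matches.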
Hello gurus! I am sure some people may have run into this. I am using the extract command to parse fields from a multi-line unstructured event, but the data values are encapsulated in single quotes. Here is an example:

====EVENT 1========
2021-09-08 00:00:00 ABC status - performance event
    name : 'James Bond'
    address : 'USA'
    age : '100'
    occupation : 'spy'
performance event END
==================

For this event, I am using the following transform:

transforms.conf
[performance_data]
DELIMS = "\r\n", ":"

The transform partially works; the problem is that the values keep the single quotes. For example, the field "name" gets the value "'James Bond'", single quotes included. How can I get rid of the single quotes?
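One option (a sketch, untested) is to strip the quotes at index time with a SEDCMD in props.conf; note this removes every single quote from the event, so it is only safe if quotes never appear inside the values themselves:

```
[your_sourcetype]
SEDCMD-strip_single_quotes = s/'//g
```

Alternatively, at search time, trim the quotes off each field after extraction:

```
... | eval name=trim(name, "'")
```

trim(X, Y) removes the characters in Y from both ends of X, leaving interior apostrophes intact.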
I have logs in JSON format where message is the key, and message contains the value mentioned below:

message: <ErrorMessage>E-delivery failed<ErrorMessage>

When I search as below in Splunk, I am able to find the events:

index="*" source="*" "E-delivery failed"

But if I want to display the count of the "E-delivery failed" string, no results are fetched, as the value under the message tag is XML. The query used is:

index="*" source="*" | eval type=case(like(message, "%E-delivery failed%"),"e delivery failed") | stats count as Results by type

With the above query I am not able to get any results. Please help me with the query. The result should be:

type count
e delivery failed 10
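A sketch worth trying (untested): if message is not reliably extracted as a field, matching against _raw sidesteps the problem, and match() takes a plain regex rather than the % wildcards that only like() understands:

```
index="*" source="*" "E-delivery failed"
| eval type=case(match(_raw, "E-delivery failed"), "e delivery failed")
| stats count as Results by type
```

If message turns out to be extracted correctly after all, replacing _raw with message here should behave the same way.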
Hi,  I have a data source of  call records for phone calls. This data contains a field "A_Number". I want to class any "A_Number" that begins with 04 as "Mobile" and anything else as "Fixed". Then I want to timechart a count of fixed and mobile events.
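A sketch (untested; assumes A_Number is already extracted as a string field and "your_base_search" stands in for the call-record search):

```
your_base_search A_Number=*
| eval line_type=if(like(A_Number, "04%"), "Mobile", "Fixed")
| timechart span=1d count by line_type
```

like(A_Number, "04%") is true when the number begins with 04; the timechart then produces one daily count series per line_type.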