
All Posts

Dear Splunkers, I need a search that tells me whether any host has both of these logs. Below is a pseudo-search that shows what I want:

index=linux host=* sourcetype=bash_history AND ("systemctl start" OR "systemctl enable")
| union [search index=linux host=* sourcetype=bash_history (mv AND /opt/)]

To make it clearer: I want a match only if a server generated one log that contains "mv" AND "/opt/" and another log that contains "systemctl start" OR "systemctl enable". Thanks in advance.
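One way to sketch this correlation (untested, and assuming the events carry the host field) is to flag each pattern per event and then require both patterns per host with stats, instead of union:

index=linux sourcetype=bash_history ("systemctl start" OR "systemctl enable" OR (mv "/opt/"))
| eval systemctl_hit=if(match(_raw, "systemctl (start|enable)"), 1, 0)
| eval mv_opt_hit=if(match(_raw, "mv") AND match(_raw, "/opt/"), 1, 0)
| stats max(systemctl_hit) as systemctl_hit max(mv_opt_hit) as mv_opt_hit by host
| where systemctl_hit=1 AND mv_opt_hit=1
| table host

The stats by host collapses all events per server, so a host only survives the where clause if it produced at least one event of each kind.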
Does the following search help? This uses json_ functions and mvexpand to split out and then match up the fields and expressions:

| datamodel
| spath output=modelName modelName
| search modelName=Network_Traffic
| eval objects=json_array_to_mv(json_extract(_raw,"objects"))
| mvexpand objects
| eval calculations=json_array_to_mv(json_extract(objects,"calculations"))
| mvexpand calculations
| eval outputFields=json_array_to_mv(json_extract(calculations,"outputFields"))
| mvexpand outputFields
| eval fieldName=json_extract(outputFields,"fieldName")
| eval expression=json_extract(calculations,"expression")
| table modelName fieldName expression
It looks like your time extraction settings are correct; however, you need to set MAX_DAYS_AGO to a higher value (e.g. 3000) for Splunk to accept that 2017 timestamp, as the default is 2000 days and Splunk is therefore rejecting the date. Let me know if adding MAX_DAYS_AGO=3000 to your extraction config works! Good luck, Will
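For reference, a minimal props.conf sketch combining the settings discussed in this thread (the sourcetype name is a placeholder):

# props.conf; [access_log_custom] is a hypothetical sourcetype name
[access_log_custom]
TIME_PREFIX = \[
TIME_FORMAT = %d/%b/%Y:%H:%M:%S %z
MAX_TIMESTAMP_LOOKAHEAD = 32
# default is 2000 days; raise it so 2017 timestamps are accepted
MAX_DAYS_AGO = 3000

Timestamp settings apply at index time, so already-ingested events keep their old timestamps; re-index a sample file to test.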
Hello everyone, I'm having trouble getting Splunk to recognize timestamps correctly, and I hope someone can help me out. I'm importing an access log file where the timestamps are formatted like this:

[01/Jan/2017:02:16:51 -0800]

However, Splunk is not recognizing these timestamps and instead assigns the indexing time. I have tried adjusting the settings in the sourcetype configuration and have set the following values:
• Timestamp format: %d/%b/%Y:%H:%M:%S %z
• Timestamp prefix: \[
• Lookahead: 32

Unfortunately, the timestamps are still not recognized correctly. Do I need to modify props.conf or inputs.conf as well? Is my timestamp format correct, or should it be defined differently? Could there be another issue in my extraction settings? Should I maybe change the log file with some scripting in order to change the format? I would really appreciate any guidance! Thank you in advance. Best regards
Hi @isoutamo, thx a lot. BR
Hi @SanjayReddy Thanks for the feedback; that screenshot is from when the receiver is a forwarder. As @isoutamo mentioned, this is a good explanation: https://community.splunk.com/t5/Knowledge-Management/Splunk-Indexer-Forwarder-Acknowledgement-explained/m-p/695624 Thanks.
Hi @takuyaikeda , please try this:

index=_audit action=search info=granted search=* NOT "search_id='scheduler" NOT "search=' | history" NOT "user=splunk-system-user" NOT "search='typeahead" NOT "search=' | metadata type=* | search totalCount>0"
| stats count by user search _time
| sort _time
| convert ctime(_time)
| stats list(_time) as time list(search) as search by user

Ciao. Giuseppe
Hello, is there any way to get a field name and its expression from a data model using the REST API (via a Splunk query)? I am already using this query, but the fields and their expressions come out shuffled:

| datamodel
| spath output=modelName modelName
| search modelName=Network_Traffic
| rex max_match=0 field=_raw "\[\{\"fieldName\":\"(?<fields>[^\"]+)\""
| rex max_match=0 field=_raw "\"expression\":\"(?<expression>.*?)\"}"
| table fields expression
We operate by using scheduled searches to periodically search through logs collected by Splunk and trigger actions when log entries matching certain conditions are found. You can create a list of actions triggered recently (for example, within the past week) by searching for alert_fired="alert_fired" in the _audit index. Is it possible to join the log entries that matched in each search execution to that list? (I want to know the result of "| loadjob <sid>" for each search.) The expected output is a table with the search execution time (_time), the search name (ss_name), and the matching log entries.
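One possible sketch (untested) drives loadjob from the audit events with map, assuming the alert_fired audit events carry the sid of the triggering job:

index=_audit action=alert_fired earliest=-7d
| table _time ss_name sid
| map maxsearches=100 search="| loadjob $sid$ | eval ss_name=\"$ss_name$\", trigger_time=\"$_time$\""

Note that loadjob can only read artifacts that are still on disk, so the dispatch TTL of the scheduled searches limits how far back this can reach.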
@nsxlogging Your company's security policy may be blocking the download of the Splunk app or add-on from Splunkbase. To resolve this, forward the error to your IT/security team so they can check firewall/proxy logs, verify whether Splunkbase or specific file types are restricted, and whitelist them if justified. Alternatively, try downloading from a different network (if permitted) or a non-corporate device and transfer the file via approved methods.
@cyberbilliam  Is this fixed? Need confirmation before migrating to Splunk Cloud.
Actually it requires that the replication factor has been met on the indexers before the ack is sent. You should read the post below and also the posts linked from it. Here is one old, excellent post about it: https://community.splunk.com/t5/Knowledge-Management/Splunk-Indexer-Forwarder-Acknowledgement-explained/m-p/695624
Hi @Wenjian_Zhu Indexer acknowledgment is sent after the data is written to the indexer's disk; it has no relation to data replication. The acknowledgment lets the forwarder know the data has been received at the indexer end, so the forwarder that sent the data can remove those events from its wait queue. It is also recommended to enable acknowledgment on both the intermediate forwarder and the indexer.
Dear splunkers, when useACK = true is set (https://docs.splunk.com/Documentation/Splunk/9.4.0/Forwarding/Protectagainstlossofin-flightdata), does the source peer send the acknowledgment after writing the data to its file system and ensuring the replication factor is met, or after writing the data to its file system alone? Best regards,
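For context, the setting in question lives in outputs.conf on the sending forwarder; a minimal sketch (the group name and hostnames are placeholders):

# outputs.conf on the forwarder; server names are hypothetical
[tcpout:primary_indexers]
server = idx1.example.com:9997,idx2.example.com:9997
useACK = true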
Thank you, it worked with count=0.
Yup. But if you have a big dashboard, especially powered by badly written searches, and a very short refresh time... That's not gonna end well
Hi, thanks - I got it: https://classic.splunkbase.splunk.com/app/3119/ However, why is this happening? There are lots of functions in the classic dashboards that are not in Dashboard Studio, so how come we are forced over? Or is this nothing to do with the new Dashboard Studio? Also, I don't see many questions about Dashboard Studio being asked or answered. Any help and insights on this would be great. Robert
To receive help with a Splunk search, it is best to give more concrete information, even if you use mock names and values. Assume the two different sources are sourcetypes sourceA and sourceB, the 3 parameters in sourceA are named "ID", "param2", and "param3", sourceB has the same field name "ID" to match that in sourceA, the "actual name of the object" is in a field named "name", and all these fields are already extracted:

sourcetype IN (sourceA, sourceB)
| stats values(name) as name values(param2) as param2 values(param3) as param3 by ID
Hi @DarrellR , as @isoutamo also said, you should put it in the main search. Ciao. Giuseppe
Hi @momagic , you have to use a subsearch: create a main query containing the data to display, adding as a subsearch (putting it between square brackets and adding the search command at the beginning) the search containing the parameters; then you can display the fields you want. You have to pay attention to two things: at the end of the subsearch you have to use a command such as table or fields to list only the fields used as filters, and the fields from the subsearch must have exactly the same names (case sensitive) as the fields in the main search. For example, if the fields used to filter events are FieldA and FieldB but the subsearch also contains other fields, you should write:

index=index1 [ search index=index2 | fields FieldA FieldB ]
| table _time host field1 field2 FieldA FieldB

If you haven't much experience with Splunk searches and haven't followed a course (there are many free Splunk courses), you could follow the Splunk Search Tutorial (https://docs.splunk.com/Documentation/SplunkCloud/9.3.2408/SearchTutorial/WelcometotheSearchTutorial), which explains how to use Splunk for searching, and here you can find a description of how to use subsearches: https://docs.splunk.com/Documentation/SplunkCloud/9.3.2408/SearchTutorial/Useasubsearch Ciao. Giuseppe