All Posts


You can always specify the time window in the search itself (earliest, latest, etc.); see Time modifiers. As to "share job", a saved search, aka "Report", might be a viable alternative.  After your search launches, you can "Save As" and select Report to give it a name.
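For illustration, here is a minimal sketch of pinning the time window with modifiers directly in the SPL (the index and the relative window are arbitrary examples):

index=_internal earliest=-24h@h latest=@h
| stats count by sourcetype

Anything set this way overrides the time range picker for that search.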
Nice!  Love these little Splunk quirks, aka tricks. (For anyone who stumbles upon the same need in the future: using _ would be perfect if Total doesn't have to be nullified.  I need to null Total, so the amount of work will be similar to reordering with foreach.)
If the dashboards somehow found their way into the /default folder then you will not be able to delete them using the UI.  Otherwise, check the permissions on the dashboards to make sure you have write access to them.  Failing that, the CLI may be your best answer.
Now that we are thinking in Splunk terms, note that the ...... part in your illustration can make a difference in how best to construct a "solution".  So, I assume that Datasets A and B are NOT from the same sources; e.g., fields b and d must come from different sources, different sourcetypes, different periods of time, or even different indices.  Without such information, volunteers have to make assumptions that may or may not be helpful.  Where such assumptions have the biggest impact is when the two datasets come from differing indices and/or time periods.

For simplicity, I will assume a common scenario in which both datasets come from the same index and the same time period.  Further assume that the only differentiating factor is sourcetype, A and B.  An effective OR would be between these two.

index=common_index ((sourcetype = A) OR (sourcetype = B))

Now, the above is often expressed as

index=common_index sourcetype IN (A, B)

Meanwhile, you may often have additional, differing search terms for A and B, so you may want to keep those parentheses.  For example, you may want to restrict events to only those with fully populated fields of interest:

index=common_index ((sourcetype = A a=* b=* c=*) OR (sourcetype = B a=* d=* e=* f=*))

Anyway, my previous post only demonstrated how to leverage any key as "primary key", but did not include the final step in an outer join.  Here it is for your scenario:

| stats values(*) as * by a
| fields a b c d e f
| foreach * [mvexpand <<FIELD>>]

Using your sample datasets, the output is

a b c d e f
a1 b1 c1 d1 e1 f1
a1 b1 c1 d1 e1 f2
a1 b1 c1 d1 e1 f3
a1 b1 c1 d1 e2 f1
a1 b1 c1 d1 e2 f2
a1 b1 c1 d1 e2 f3
a1 b1 c1 d1 e3 f1
a1 b1 c1 d1 e3 f2
a1 b1 c1 d1 e3 f3
a1 b1 c1 d2 e1 f1
a1 b1 c1 d2 e1 f2
a1 b1 c1 d2 e1 f3
a1 b1 c1 d2 e2 f1
a1 b1 c1 d2 e2 f2
a1 b1 c1 d2 e2 f3
a1 b1 c1 d2 e3 f1
a1 b1 c1 d2 e3 f2
a1 b1 c1 d2 e3 f3
a1 b1 c1 d3 e1 f1
a1 b1 c1 d3 e1 f2

Here is an emulation that you can play with and compare with real data:

| makeresults
| eval _raw = "a,b,c
a1,b1,c1
a2,b2,c2"
| multikv forceheader=1
| fields - _* linecount
| eval sourcetype = "A"
| append [makeresults
    | eval _raw = "a,d,e,f
a1,d1,e1,f1
a1,d2,e2,f2
a1,d3,e3,f3
a2,d4,e4,f4
a2,d5,e5,f5"
    | multikv forceheader=1
    | fields - _* linecount
    | eval sourcetype = "B"]
``` data emulation above ```
By "use case" I presume you mean "search".  If so, you can get all saved searches with this query | rest /servicesNS/-/-/saved/searches
Hi @Abhiram.Sahoo, At this time, I was told it's best to reach out to AppD Support for more help: How do I submit a Support ticket? An FAQ
Updating my SPL to below index=XXX source="XXX" | eval status=spath(_raw,members{}.state) | eval rs_status=case(status == "Primary", "OK", status =="ARBITER", "OK", status == "SECONDARY", "OK", status == "STARTUP", "KO", status == "RECOVERING", "KO" status == "STARTUP2", "KO", status == "UNKNOWN", "KO", status == "DOWN", "KO", status == "ROLLBACK", "KO", status == "REMOVED", "KO") | sort - _time | where rs_status="KO" below is the JSON format { "set" : "replset", "date" : ISODate("2020-03-05T05:24:45.567Z"), "myState" : 1, "term" : NumberLong(3), "syncSourceHost" : "", "syncSourceId" : -1, "heartbeatIntervalMillis" : NumberLong(2000), "majorityVoteCount" : 2, "writeMajorityCount" : 2, "votingMembersCount" : 3, // Available starting in v4.4 "writableVotingMembersCount" : 3, // Available starting in v4.4 "optimes" : { "lastCommittedOpTime" : { "ts" : Timestamp(1583385878, 1), "t" : NumberLong(3) }, "lastCommittedWallTime" : ISODate("2020-03-05T05:24:38.122Z"), "readConcernMajorityOpTime" : { "ts" : Timestamp(1583385878, 1), "t" : NumberLong(3) }, "readConcernMajorityWallTime" : ISODate("2020-03-05T05:24:38.122Z"), "appliedOpTime" : { "ts" : Timestamp(1583385878, 1), "t" : NumberLong(3) }, "durableOpTime" : { "ts" : Timestamp(1583385878, 1), "t" : NumberLong(3) }, "lastAppliedWallTime" : ISODate("2020-03-05T05:24:38.122Z"), "lastDurableWallTime" : ISODate("2020-03-05T05:24:38.122Z") }, "lastStableRecoveryTimestamp" : Timestamp(1583385868, 2), "electionCandidateMetrics" : { "lastElectionReason" : "stepUpRequestSkipDryRun", "lastElectionDate" : ISODate("2020-03-05T05:24:28.061Z"), "electionTerm" : NumberLong(3), "lastCommittedOpTimeAtElection" : { "ts" : Timestamp(1583385864, 1), "t" : NumberLong(2) }, "lastSeenOpTimeAtElection" : { "ts" : Timestamp(1583385864, 1), "t" : NumberLong(2) }, "numVotesNeeded" : 2, "priorityAtElection" : 1, "electionTimeoutMillis" : NumberLong(10000), "priorPrimaryMemberId" : 1, "numCatchUpOps" : NumberLong(0), "newTermStartDate" : ISODate("2020-03-05T05:24:28.118Z"), "wMajorityWriteAvailabilityDate" : ISODate("2020-03-05T05:24:28.228Z") }, "electionParticipantMetrics" : { "votedForCandidate" : true, "electionTerm" : NumberLong(2), "lastVoteDate" : ISODate("2020-03-05T05:22:33.306Z"), "electionCandidateMemberId" : 1, "voteReason" : "", "lastAppliedOpTimeAtElection" : { "ts" : Timestamp(1583385748, 1), "t" : NumberLong(1) }, "maxAppliedOpTimeInSet" : { "ts" : Timestamp(1583385748, 1), "t" : NumberLong(1) }, "priorityAtElection" : 1 }, "members" : [ { "_id" : 0, "name" : "m1.example.net:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 269, "optime" : { "ts" : Timestamp(1583385878, 1), "t" : NumberLong(3) }, "optimeDate" : ISODate("2020-03-05T05:24:38Z"), "lastAppliedWallTime": ISODate("2020-03-05T05:24:38Z"), "lastDurableWallTime": ISODate("2020-03-05T05:24:38Z"), "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "", "electionTime" : Timestamp(1583385868, 1), "electionDate" : ISODate("2020-03-05T05:24:28Z"), "configVersion" : 1, "configTerm" : 0, "self" : true, "lastHeartbeatMessage" : "" },
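For reference, a minimal sketch of one way to flag problem members from rs.status output like the above, reading the human-readable state from members{}.stateStr (shown uppercase in the sample, e.g. "PRIMARY") rather than the numeric members{}.state; the index and source values are the placeholders from the post:

index=XXX source="XXX"
| spath path="members{}.stateStr" output=member_state
``` one row per replica set member ```
| mvexpand member_state
| eval rs_status=if(in(member_state, "PRIMARY", "SECONDARY", "ARBITER"), "OK", "KO")
| where rs_status="KO"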
In my organizational environment, there are a few alerts in the enabled state. I would like to create an inventory of all the enabled alerts and their important fields on GitHub. Is there a way to automate the transfer to GitHub without requiring manual effort? All the alerts are on Splunk Cloud.
So I'm working to implement a clear-filters button on a Simple XML dashboard. I'm unable to use any custom JavaScript, so I've been doing all of it within the XML. I have the functionality I'm looking for by using a link list input with condition changes to unset the tokens to their defaults, but I'm having issues with my submit button lining back up. No matter what I seem to do, I can't get the submit button to come in line with the Clear Filters "button". If anyone could help me get the Submit button in line with my link list input, that would be greatly appreciated. I have some instance-agnostic XML code below so you can see what I'm talking about. Thanks!

<form theme="dark">
  <label>Clear Filters</label>
  <fieldset submitButton="true">
    <input type="multiselect" token="Choice">
      <label>Choices</label>
      <choice value="*">All</choice>
      <choice value="Choice 1">Choice 1</choice>
      <choice value="Choice 2">Choice 2</choice>
      <choice value="Choice 3">Choice 3</choice>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
    <input type="link" token="Clearer" searchWhenChanged="true" id="list">
      <label></label>
      <choice value="Clear">Clear Filters</choice>
      <change>
        <condition value="Clear">
          <unset token="form.Choice"></unset>
          <unset token="form.Clearer"></unset>
        </condition>
      </change>
    </input>
    <html>
      <style>
        #list button{
          color: white;
          background: green;
          width:50%;
          display: inline-block;
        }
      </style>
    </html>
  </fieldset>
  <row>
    <panel>
      <single>
        <search>
          <query>| makeresults | eval Message="Thanks for the help!" | table Message</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </single>
    </panel>
  </row>
</form>
Hello All,

I need to monitor a MongoDB replica set for its status. For this I have to run the rs.status command in the admin DB for MongoDB; this gives me JSON output, and I need to look for the replica set status in that output and trigger an alert. I would appreciate any pointers on this, and if someone could take a look at the code below and provide feedback, that would be helpful. This one is for triggering the alert based on a condition; I am trying to use case for this.

index=XXXX
| eval rs_status=case(status == "Primary", "OK", status == "ARBITER", "OK", status == "SECONDARY", "OK", status == "STARTUP", "KO", status == "RECOVERING", "KO" status == "STARTUP2", "KO", status == "UNKNOWN", "KO", status == "DOWN", "KO", status == "ROLLBACK", "KO", status == "REMOVED", "KO")
| sort - _time
| where status="KO"

Let me know if you see any issues here.

Regards
Amit
I am looking for a Splunk query that gives me all use cases in both the enabled and disabled states.
This worked for me, thanks!
@gcusello, that helped; however, now I have to change my query, as we are not receiving a response for a few UNIQUE_IDs, so the difference is showing as 0 seconds. I am using a subsearch for this, so it should capture only events for which we received a response.  The subsearch itself is not returning any results.  After this I need to work on the time difference.

index=web* "Message sent to Kafka"
| where UNIQUE_ID IN (
    [ search index=web* "Response received from Kafka"
      | fields UNIQUE_ID ])
| table UNIQUE_ID, _time
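A minimal sketch of a subsearch-free alternative, using only the index, message strings, and UNIQUE_ID field from the post (assuming both event types share UNIQUE_ID):

index=web* ("Message sent to Kafka" OR "Response received from Kafka")
| eval sent_time=if(searchmatch("Message sent to Kafka"), _time, null())
| eval received_time=if(searchmatch("Response received from Kafka"), _time, null())
| stats min(sent_time) as sent_time max(received_time) as received_time by UNIQUE_ID
``` keep only IDs that have both a request and a response ```
| where isnotnull(sent_time) AND isnotnull(received_time)
| eval diff_seconds=received_time - sent_time

This also yields the time difference directly and avoids subsearch result limits.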
Thanks @yuanliu and @bowesmana for your replies. While I thought that my original goal was clear, I have to agree with @yuanliu that I need to give a more concrete example of it (although I disagree that I was really thinking in SQL terms; I simply used the concept of 'primary key' to say more succinctly what I was trying to). I also believe that @bowesmana is getting closer to what I need. BTW, I am not very familiar with the use of OR in this fashion (are there any restrictions on the shape of those datasets? I happen to get an empty result set when I 'OR' my result sets), and also, I am not using the word 'dataset' in the official way (I simply meant a set of results from a Splunk query). So, for clarification purposes, here's what I have.

Dataset A is [ search ...... | table a, b, c ], so for example here's a list of results:

a1 b1 c1
a2 b2 c2

and dataset B is [search .... | table a, d, e, f], such as:

a1 d1 e1 f1
a1 d2 e2 f2
a1 d3 e3 f3
a2 d4 e4 f4
a2 d5 e5 f5

What I need as a result of a new search is a new set of results C that looks like this:

a1 b1 c1 d1 e1 f1
a1 b1 c1 d2 e2 f2
a1 b1 c1 d3 e3 f3
a2 b2 c2 d4 e4 f4
a2 b2 c2 d5 e5 f5

Thanks!
I forgot to mention: one of the ultimate uses of the master list would be to leverage it as the primary resource used for Assets in the Assets & Identities framework. It seems like this framework already does this merge for you, but I would also like to have the master list available for other processes. It might be best just to add the individual asset lists and let ES merge them into the `assets` table.
Hi, I just deployed the latest version 2 of SC4S, and I sent syslog events from our Stormshield firewall. I checked and didn't see a specific source for this firewall brand. The box is capable of sending logs in RFC 5424 format over UDP/514. I did not configure a custom filter for it, and the logs are automatically recognized as UNIX OS syslog events, which is wrong; they are indexed in osnix instead of netfw. I would like to create a filter based on the source host, but I can't find any examples in the official GitHub documentation.  For version 1 there are some, but I am not sure whether they apply to version 2. https://splunk.github.io/splunk-connect-for-syslog/1.110.1/configuration/#override-index-or-metadata-based-on-host-ip-or-subnet-compliance-overrides Any suggestions? Many thanks
Hello,

Our security team has had a need for an asset management tool to keep track of our hardware and software inventory with respect to our security processes and security controls. Our support team already maintains a CMDB, but it doesn't do a great job and provides almost no value as a master list or as a way to audit for gaps in security control coverage.  Our team deploys a variety of tools that use agents or network discovery scans to give a partial list of asset inventories. When we do comparisons, none of them is complete enough not to have some variance between the different tools. We would like a CMDB that allows us to track our assets and our security control coverage. You cannot secure what you don't know about!

One idea has been to grab asset information from all the tools using custom API input scripts and aggregate it in Splunk into one KV Store table. Then we could use this table as a master list. We have the Splunk deployment clients and the asset_discovery scan results, but we also have cloud-delivered solutions for vuln mgmt, EDR, AV, MDM, etc.  I wanted to reach out to the community to see if anybody else has come across this use case and whether there are any resources or guidance anybody can share to make this idea a reality.
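As a rough sketch of that aggregation idea, assuming the per-tool API inputs land in a hypothetical index and that a KV Store lookup named asset_master has been defined via collections.conf/transforms.conf (all index, sourcetype, and field names below are placeholders):

index=security_asset_feeds sourcetype IN (edr_assets, vuln_assets, mdm_assets, splunk_clients)
``` normalize a key so the same host reported by different tools collapses into one row ```
| eval asset_key=lower(coalesce(fqdn, hostname, ip))
| stats latest(ip) as ip latest(fqdn) as fqdn values(sourcetype) as reporting_tools max(_time) as last_seen by asset_key
| outputlookup asset_master

A table like this can then back both ad-hoc coverage-gap searches and an Assets & Identities input.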
Running the SCMA app pre-migration checks in preparation for moving our environment to Cloud, we were notified of a number of old dashboards floating around using the deprecated 'Advanced XML'. As most or all of these are no longer needed, I made the decision to delete them. However, it appears that the Search and Reporting app (where most of these dashboards reside) is not managed by our SHC deployer, and the old dashboards themselves cannot be deleted from the GUI (Settings > User interface > Views). As shown below, most dashboards (top) have a Delete option, but none of the AXML dashboards allow this action. Other than manually 'rm -rf'ing on the backend for all our search heads, is there another way I can easily delete these dashboards?
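One way to at least inventory the offending views first: Advanced XML dashboards use a <view> root element, so a REST search along these lines may help (a sketch; the string match may need tightening for your environment):

| rest /servicesNS/-/-/data/ui/views
| search eai:data="*<view*"
| table title eai:acl.app eai:acl.owner eai:acl.sharing

If permissions allow, individual views can also typically be removed with a REST DELETE against /servicesNS/<owner>/<app>/data/ui/views/<name>, which avoids touching the filesystem on each search head.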
Two different sources return data in the format below.

Source 1 - Determines the time range for a given date based on the execution of a job, which logically concludes the End of Day in the application.
Source 2 - Events generated in real time for various use cases in the application. EventID1 is generated as part of the job in Source 1.

Source 1
DATE   Start Time                End Time
Day 3  2023-09-12 01:12:12.123   2023-09-13 01:13:13.123
Day 2  2023-09-11 01:11:11.123   2023-09-12 01:12:12.123
Day 1  2023-09-10 01:10:10.123   2023-09-11 01:11:11.123

Source 2
Event type  Time                      Others
EventID2    2023-09-11 01:20:20.123
EventID1    2023-09-11 01:11:11.123
EventID9    2023-09-10 01:20:30.123
EventID3    2023-09-10 01:20:10.123
EventID5    2023-09-10 01:10:20.123
EventID1    2023-09-10 01:10:10.123

There are no common fields available to join the two sources other than the time at which the job is executed and at which EventID1 is generated. The expectation is to logically group the events based on date and derive the stats for each day. I'm new to Splunk and I would really appreciate it if you could provide suggestions on how to handle this one.

Expected Result
Date   Events                                       Count
Day 1  EventID1 EventID2 EventID3 - - - EventID9    1 15 10 - - 8
Day 2  EventID1 EventID2 - - - EventID9 EventID11   1 2 - - 18 6
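A minimal sketch of one way to approach this, assuming the Source 1 event's _time corresponds to its Start Time, and using placeholder source names (day_boundaries and app_events) and a placeholder event_type field:

(index=app source=day_boundaries) OR (index=app source=app_events)
| eval day_label=if(source="day_boundaries", DATE, null())
``` sort ascending so each boundary row precedes the events that fall in its window ```
| sort 0 _time
| filldown day_label
| where source="app_events"
| chart count over day_label by event_type

The filldown carries each day's label forward onto every event until the next boundary, which gives a per-day, per-event-type count without a join.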
  Hello, Has anyone experienced parsing Windows Event Logs using a KVstore for all of the generic verbiage?  For example - red text (general/static text associated with EventCode number and other values  - this will mostly be the Message/body fields) will be entered into a KVstore; green text (values within the event) will be indexed. In the below example, there are 2,150 characters, of which, 214 characters are dynamic, and need to be indexed. The red(undant) text contains over 1,930 characters.  These are just logon (4624) events. With over 11.5 million logon events per day across our environment, this is ~23 GB. If what I am asking can be/has been accomplished, we could reduce this to 2.3 GB. Thanks and God bless, Genesius   09/11/2023 12:00:00AM LogName=Security EventCode=4624 EventType=0 ComputerName=<computer name> SourceName=Microsoft Windows security auditing. Type=Information RecordNumber=9696969696 Keywords=Audit Success TaskCategory=Logon OpCode=Info Message=An account was successfully logged on.   Subject:                 Security ID:                         NT AUTHORITY\SYSTEM                 Account Name:                 <account name>                 Account Domain:                              <account domain>                 Logon ID:                             0x000   Logon Information:                 Logon Type:                        3                 Restricted Admin Mode:               -                 Virtual Account:                No                 Elevated Token:                 Yes   Impersonation Level:                      Identification   New Logon:                 Security ID:                         <security id>                 Account Name:                 <account name>                 Account Domain:                              <account domain>                 Logon ID:                             0x0000000000                 Linked Logon ID:                               0x0                 Network Account Name:               -                 Network Account Domain:           -                 Logon GUID:                       <login guid>   Process Information:                 Process ID:                          0x000                 Process Name:                   D:\Program Files\Microsoft System Center\Operations Manager\Server\Microsoft.Mom.Sdk.ServiceHost.exe   Network Information:                 Workstation Name:         <workstation name>                 Source Network Address:             -                 Source Port:                        -   Detailed Authentication Information:                 Logon Process:                  <login process>                 Authentication Package: <authentication package>                 Transited Services:           -                 Package Name (NTLM only):        -                 Key Length:                         0   This event is generated when a logon session is created. It is generated on the computer that was accessed.   The subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe.   The logon type field indicates the kind of logon that occurred. The most common types are 2 (interactive) and 3 (network).   The New Logon fields indicate the account for whom the new logon was created, i.e. the account that was logged on.   The network fields indicate where a remote logon request originated. 
Workstation name is not always available and may be left blank in some cases.   The impersonation level field indicates the extent to which a process in the logon session can impersonate.   The authentication information fields provide detailed information about this specific logon request.                 - Logon GUID is a unique identifier that can be used to correlate this event with a KDC event.                 - Transited services indicate which intermediate services have participated in this logon request.                 - Package name indicates which sub-protocol was used among the NTLM protocols.                 - Key length indicates the length of the generated session key. This will be 0 if no session key was requested.
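If the approach is to index only the dynamic values and re-attach the boilerplate at search time, here is a sketch of the search-time half, assuming a hypothetical KV Store (or CSV) lookup named wineventlog_boilerplate with fields EventCode and static_text that holds the red(undant) verbiage per event code:

index=wineventlog EventCode=4624
| lookup wineventlog_boilerplate EventCode OUTPUT static_text
``` rebuild a readable message from the indexed dynamic fields plus the stored boilerplate ```
| eval Message=coalesce(static_text, Message)
| table _time ComputerName EventCode Message

The ingest-side trimming that actually reduces license usage (for example, a SEDCMD in props.conf that drops the static description before indexing) would still need to be handled separately.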