All Posts

I have this date string example: Mon, 01 May 2023 00:00:00 GMT. How can I convert it to epoch? Thanks!
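A minimal sketch of one way to do this with strptime (the makeresults and the date_string field are placeholders for illustration):

| makeresults
| eval date_string="Mon, 01 May 2023 00:00:00 GMT"
| eval epoch=strptime(date_string, "%a, %d %b %Y %H:%M:%S %Z")

The format tokens mirror the example string: %a for the weekday, %d %b %Y for the date, %H:%M:%S for the time and %Z for the GMT suffix.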
Hello, I sat for the Splunk Certified Cybersecurity Defense Analyst (CDA) exam on the 14th of September, 2023; it has been a week and there is still no result. How can I access my result?
I have a number of Lookups that I create with a similar naming convention (and plan to create more in the future). I want to have a saved search that searches across all of these lookups. The following does not work, because the subsearch only returns a litsearch of (title="chosen_lookups_abcd" OR title="chosen_lookups_bcda"):

|inputlookup append=t
    [| rest /servicesNS/-/-/data/lookup-table-files
    | search title=chosen_lookups*
    | table title ]

Ideally, something like the following would work:

| inputlookup append=t chosen_lookups*

Thanks in advance
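One possible approach is to drive inputlookup from the rest results with the map command; a sketch, assuming the chosen_lookups* entries are CSV lookup table files visible to the searching role:

| rest /servicesNS/-/-/data/lookup-table-files
| search title=chosen_lookups*
| fields title
| map maxsearches=100 search="| inputlookup $title$"

map runs the inner inputlookup once per matching lookup, so with many lookups you may need to raise maxsearches, and it will be slower than a single inputlookup.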
It seems to me that you need the section name to be part of the attribute name. Try something like this

| foreach Attributes.*
    [| eval name=SectionName.".<<MATCHSEG1>>"
    | eval {name}='<<FIELD>>']
| fields - Attributes.* name SectionName
| fillnull value=""
| transpose column_name=Attribute header_field=Object
| eval match = if('HJn5server1' == 'HJn7server3', "y", "n")
Thank you @shnmugam. Will try the same.
Can someone help me understand my query above, please?
There are several solutions; I usually prefer the one described in the Splunk Dashboard Examples App (https://splunkbase.splunk.com/app/1603), in the "link switch" example. In this way you can have several dashboards within the same dashboard, each one with a limited set of panels, and move from one to another using the link switch in the header. Ciao. Giuseppe
Hi @Zane, no, that parameter applies when you rotate a file within the same folder, but, if I understood correctly, you move it to another folder. The solution is to also put the destination folder under monitoring, so you pick up the events between the last read and the rotation, remembering that you cannot use crcSalt = <SOURCE> in your inputs.conf. Ciao. Giuseppe
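A minimal inputs.conf sketch of this approach; the paths, sourcetype and index are placeholders to adapt to your environment:

[monitor:///var/log/app/current.log]
sourcetype = app_log
index = app

[monitor:///var/log/app/archive/*.log]
# Monitoring the destination folder picks up the events written between the
# last read and the rotation. Do not set crcSalt = <SOURCE> here, or the
# moved file would be treated as a new source and re-indexed in full.
sourcetype = app_log
index = app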
Hello Team, we are trying to develop a function to validate whether a user ID and password (in an artifact) are valid. We thought of using an LDAP bind, but unfortunately we are getting an error that we cannot import LDAP3. Has anyone developed an app or function to test this?
| makeresults format=csv data="process,message
A,message 0
B,message 0
A,message 1
B,message 1
A,message 2
B,message 2
A,message 1
B,message 3
A,message 2
A,message 1
A,message 2"
| eventstats count as repeats by process message
| where repeats > 1

As you can see, only messages that are repeated are shown.
Your input setup seems a little unconventional; please confirm how you have actually set up the inputs and where all the tokens are coming from.
Hi Splunkers, I have a huge report with 15 to 20 pages' worth of information which I need to show in a dashboard panel. Is there any way I can add an “expand”/“collapse” option to present my data in a better way, especially so that non-Splunk users can understand it more easily? Thanks
Sorry - but how does this pick up a set of messages on the same process repeating?
Hi @gcusello, thanks for your answer, but I can't delay the rotation because those log files are not managed by us, so if it's possible, adjusting this from the Splunk side would be great. As you said, "you can delay the rotation (30/60 seconds are sufficient) so Splunk will read also the last event in the file"; based on this, I found there is a parameter in inputs.conf, "time_before_close", which is 3 by default. Can I adjust this value so as to delay the UF closing monitored files, for example by setting it to 30? Thanks so much. \Zane
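For reference, a sketch of the setting being asked about (the monitor path is a placeholder); as the reply above notes, it mainly helps when the file is rotated in place rather than moved to another folder:

[monitor:///var/log/app/current.log]
# Wait 30 seconds after reaching EOF before closing the file handle
# (the default is 3), so events written just before rotation are still read.
time_before_close = 30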
| eventstats count as repeats by process message | where repeats > 1
The messages are valid, but once they start to loop that indicates an issue with the process. Messages can be from different processes, but I am only interested in messages repeating on the same process.
You could put your collect in an appendpipe in your alert search, something like this

<your search>
| appendpipe
    [| rename x as y
    | table y
    | collect index=other
    | where false()]
| fields - y
It is possible, but it depends on your messages. Are the messages from the process unique apart from when they are repeated? Can you correlate messages from the same instance of the process without confusing them with messages from another instance of the process? Are the loops any bigger or smaller than two messages? What do you need to keep in the report, e.g. all messages, just the process id, just the time of the first duplicated message, or just the fact that a process has looped?
Hi @Manish_Sharma, you can grant a role access to an app, and then give the role access to one or more indexes, but when a role has access to an index, you cannot restrict access to only a part of it. If you need to do this, you have to apply one of the following workarounds: in the app, create distinct dashboards for each role, displaying only the permitted events and disabling access to the direct search form; or schedule a search that exports the data for the limited users to a summary index (you don't have additional costs) and give the restricted users access only to the summary index. I prefer the second one, which is easier. Ciao. Giuseppe
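A minimal sketch of the second workaround, a scheduled search that writes the permitted subset of events to a summary index; the index names and the filter are placeholders:

index=main sourcetype=app_events department="restricted_team"
| fields _time host user action
| collect index=summary_restricted

The restricted role is then given access only to summary_restricted, not to the original index.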
Hi @bobmccoy, this is surely a very slow search; avoid using the join command, Splunk isn't a DB! Let me understand: you want all the notables for the users in the user_watchlist lookup, is that correct? If this is your requirement, you could try something like this:

index=notable
    [ | inputlookup user_watchlist WHERE _key=*
    | rename _key AS user, asset AS src
    | fields user src
    | dedup user src ]
| where isnotnull(src)
| mvexpand src
| mvexpand user
| dedup src user
| eval user=mvindex(split(user,"@"),0)
| rename src as asset
| eval asset=lower(asset)

Ciao. Giuseppe