All Posts


I have this search, where I get the duration, and I need to convert it to a whole number of minutes. Example (Min:Sec to whole minutes):

00:02  to  1
00:16  to  1
01:53  to  2
09:20  to  10
...etc.

Search:

index="cdr"
| search "Call.TermParty.TrunkGroup.TrunkGroupId"="2811" OR "Call.TermParty.TrunkGroup.TrunkGroupId"="2810" "Call.ConnectTime"=* "Call.DisconnectTime"=*
| lookup Pais Call.RoutingInfo.DestAddr OUTPUT Countrie
| eval Disctime=strftime('Call.DisconnectTime'/1000,"%m/%d/%Y %H:%M:%S %Q")
| eval Conntime=strftime('Call.ConnectTime'/1000, "%m/%d/%Y %H:%M:%S%Q")
| eval diffTime=('Call.DisconnectTime'-'Call.ConnectTime')
| eval Duracion=strftime(diffTime/1000, "%M:%S")
| table Countrie, Duracion

Current output:

Spain      00:02
Spain      00:16
Argentina  00:53
Spain      09:20
Spain      02:54
Spain      28:30
Spain      01:18
Spain      00:28
Spain      16:40
Spain      00:03
Chile      00:25
Uruguay    01:54
Spain      01:54

Regards.

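A minimal sketch of one possible approach, assuming diffTime stays in milliseconds as in the search above (field names simply follow that search, this is not a confirmed answer from the thread):

| eval diffTime=('Call.DisconnectTime'-'Call.ConnectTime')
| eval Duracion=ceiling(diffTime/60000)
| table Countrie, Duracion

ceiling() rounds the fractional minutes up, which matches the 00:02 to 1 and 09:20 to 10 examples given above.
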
This is actually quite a simple question, but giving a good answer is extremely hard. My answer, or rather my thinking, is based on what I have learned e.g. in the finance sector. I assume that in almost all enterprise-grade businesses, a URL is just a start/execution point for the real application. This means that when this endpoint is called, it is just e.g. an API Gateway request which is forwarded to one (usually several) backend(s) that process the real request and then return the needed response to the client.

From a technical point of view this means that there is a session (what the user is doing in a real-life transaction) and it contains several requests (those individual URLs), each processing e.g. an individual dashboard or a step in the real process. Usually there should be a sessionId which is fixed for one real-life transaction, e.g. logging into a web bank and doing whatever you do in one login (e.g. check balance, pay some invoices, transfer money etc.). Then there is a requestId, which is the execution of one individual URL / process step (like viewing your account balance, checking an invoice, modifying an invoice, accepting it for payment etc.).

When you think about this workflow and what kinds of events all those tens of subsystems generate for one click on an entry-point URL, it is quite obvious that you cannot define any DM which can describe this bunch of events. I suppose that you could build a DM for base audit data, but as the payloads of different requests to backend systems are totally different, it will be extremely hard to create a generic DM for this. If/when needed you can do it yourself, but quite probably it will be different for every customer, or at least for every entry point.

Just some thoughts, not a real answer.

r. Ismo

Hi

I propose that you set up a test/lab system to test and document this change. There are some things you need to check and test:

- Are you using the REST API for your current users? This works differently with SAML users.
- How do you ensure that the user accounts / userIDs do not change, or, if they have changed, how do you migrate users' private KOs, users' schedules, and anything else that depends on the userID (one way to inventory KO ownership is sketched after this post)?
- Do you need CLI access with your old LDAP users? This does not work with SAML accounts, or at least it needs some additional scripts or something else.
- Probably something else, depending on which SAML IdP you are using?

r. Ismo

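A minimal sketch of one way to inventory knowledge-object ownership before such a migration, assuming admin access and using saved searches as just one example of a KO type (adapt the REST endpoint to the object types you care about):

| rest /servicesNS/-/-/saved/searches
| table title eai:acl.app eai:acl.sharing eai:acl.owner
| search eai:acl.sharing="user"

This lists private saved searches per owner, which helps spot objects that would be orphaned if userIDs change.
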
Hi

I think this is a place for a subsearch, like:

index=lalala source=lalala EventID=4728 AND PrimaryGroupId IN (512,516,517,518,519) AND
    [ search index=lalala source=lalala EventID=4720
      | fields UserName
      | dedup UserName
      | format ]

This way it first looks up the UserNames that have been created, and then the "outer" base search filters on those (UserName="xxx" OR UserName="yy" ...).

If you are looking at a long time period, then maybe there are better options too.

r. Ismo

All fields should be there if they contain some values. You could debug it e.g. by:

- commenting the dedup out
- commenting the stats out and replacing it with table (see the hypothetical example below)

Also, if/when you are using Verbose mode, you can see what values you have in the Events tab. With Smart or Fast mode this tab is not available.

r Ismo

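For example, with a purely hypothetical search (names invented only to illustrate the idea):

index=web_logs | dedup session_id | stats count by status

the debug version would drop the dedup and replace the stats with table:

index=web_logs | table session_id status

and be run in Verbose mode so the Events tab shows the raw field values.
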
One old post which does exactly what you are trying to do: https://community.splunk.com/t5/Getting-Data-In/sending-specific-events-to-nullqueue-using-props-amp-amp/m-p/660688
Here is another one: https://conf.splunk.com/files/2021/slides/PLA1410C.pdf
Yes, I want to take all logs except events with envoy in them. I am only using a universal forwarder, which I believe can't parse data the way a heavy forwarder can. Am I mistaken? I made the transformation names unique and restarted Splunk via the GUI, but still no discards.
Please write all SPL etc. inside </> tags. That way it is easier to take into use, and it also ensures that we get the same SPL that you wrote in your example.
Hi

Even if you can mask that data in the GUI, it doesn't mean that you have really masked it in Splunk. You must remember that once it has been written into a bucket, it is there, and there is always a way to get it out in plain text if/when you have access to the GUI and can write SPL, even if you are using search-time props.conf and transforms.conf.

The only way is to remove that data from the index and reindex it again. Even the delete command is not enough: if you have access to the buckets at the CLI level, you could get that data back. The only real option is to let it age out by setting the frozen time low enough (see the sketch below), then wait, and then reindex the cleaned data.

r. Ismo

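A rough sketch only (the index name and retention value are placeholders, not a recommendation): retention is driven by indexes.conf, e.g.

[your_index]
# buckets older than this are rolled to frozen; with no coldToFrozenDir set they are deleted
frozenTimePeriodInSecs = 86400

After the old buckets have aged out, the cleaned data can be reindexed.
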
Thanks @bowesmana @yuanliu @gcusello for your help and input.

One thing I'm still missing is being able to populate values in all the fields listed. Let me explain with some screenshots for better context:

- The output of just the corelight query shows values for its fields.
- The output of just the firewall query shows values for all related fields.
- However, the output of the suggested OR queries does not populate the values of many of the fields (highlighted in red in the screenshots, e.g. (1) and (2)).

I only see 5 values when using stats values(*) by * in the OR query (it seems like it's just the fields that are common to both indices, and none of the others listed are displayed?).

Any suggestions on this?

Thanks!

One way to do it is this:

| makeresults format=csv data="location, name
location A, name A2
location B, name B1
location B, name B2
location C, name C1
location C, name C2
location C, name C3"
| search name != "*2*"
| stats count by location
| append
    [| makeresults format=csv data="location, name
location A, name A2
location B, name B1
location B, name B2
location C, name C1
location C, name C2
location C, name C3"
    | eval count=0
    | fields location count
    | dedup location]
| stats sum(count) as count by location

but as @PickleRick said, Splunk is not good with non-existent values.

r. Ismo

Hi bishida, I appreciate your response. Thanks for the pointer to the development guide. I'll give it a try.
Hi

Based on the log, you are running an unsupported OS:

CONTROL [initandlisten] targetMinOS: Windows 7/Windows Server 2008 R2

On Windows operating systems, the oldest supported version is Windows Server 2019 or Windows 10.

r. Ismo

Use the appendpipe command to add synthetic results when the subsearch finds nothing.

| where
    [ | loadjob $stoermeldungen_sid$
      | where stoerCode IN ("S00")
      | addinfo
      | where importZeit_unixF &gt;= relative_time(info_max_time,"-d@d") AND importZeit_unixF &lt;= relative_time(info_max_time,"@d")
      | stats count as dayCount by zbpIdentifier
      | sort -dayCount
      | head 10
      | appendpipe
          [| stats count as Count
           | eval zbpIdentifier="Nothing found"
           | where Count=0
           | fields - Count]
      | table zbpIdentifier ]

Dear experts

Based on the following search:

<search id="subsearch_results">
  <query>
    search index="iii" search_name="nnn" Umgebung="uuu" isbName="isb" status IN ("ALREADY*", "NO_NOTIF*", "UNCONF*", "NOTIF*") zbpIdentifier NOT 453-8888 stoerCodeGruppe NOT ("GUT*")
    | eval importZeit_unixF = strptime(importZeit, "%Y-%m-%dT%H:%M:%S.%N%Z")
    | eval importZeit_humanF = strftime(importZeit_unixF, "%Y-%m-%d %H:%M:%S")
    | table importZeit_humanF importZeit_unixF zbpIdentifier status stoerCode stoerCodeGruppe
  </query>
  <earliest>$t_time.earliest$</earliest>
  <latest>$t_time.latest$@d</latest>
  <done>
    <condition>
      <set token="stoermeldungen_sid">$job.sid$</set>
    </condition>
  </done>
</search>

I try to load some data with:

<query>
  | loadjob $stoermeldungen_sid$
  | where stoerCode IN ("S00")
  | where
      [ | loadjob $stoermeldungen_sid$
        | where stoerCode IN ("S00")
        | addinfo
        | where importZeit_unixF &gt;= relative_time(info_max_time,"-d@d") AND importZeit_unixF &lt;= relative_time(info_max_time,"@d")
        | stats count as dayCount by zbpIdentifier
        | sort -dayCount
        | head 10
        | table zbpIdentifier ]
  | addinfo
  | where ....

Basic idea:
- The subsearch first derives the top 10 elements based on the number of yesterday's error messages.
- Based on the subsearch result, the 7-day history is then read and displayed (not fully shown in the example above).

All works fine except when no messages are found by the subsearch. If no error messages of the given type were recorded yesterday, the subsearch returns a result which causes the following error message in the dashboard:

Error in 'where' command: The expression is malformed. An unexpected character is reached at ')'.

The where command in question is the one that should take the result of the subsearch (the third line of the second query above). The error message is just not nice for the end user; it would be better to get just an empty chart if no data is found.

The question is: how can the result of the subsearch be fixed so that the main search still runs and gets a proper empty result, and therefore an empty graph instead of the "not nice" error message?

Thank you for your help.

As I said above, there is a steep learning curve with SPL's JSON flattening schema. But it is learnable, and the syntax is reasonably logical. (Logical, not intuitive or self-explanatory.)

First, the easiest way to examine each individual array element is with mvexpand. Like:

| spath path=json.msg
| spath input=json.msg path=query{}
| mvexpand query{}
| rename query{} as query_single

Here, xxx{} is SPL's explicit notation for an array that is flattened from a structure; an array is most commonly known in SPL as a multivalue field. You will see a lot of this word in the documentation.

Second, if you only want the first element, simply take it using mvindex:

| spath path=json.msg
| spath input=json.msg path=query{}
| eval first_query = mvindex('query{}', 0)

Test these over any of the emulations @ITWhisperer and I supplied above, and compare with your real data.

The queue = nullQueue setting is not valid in props.conf. Make sure the host and source names match those of the incoming data. Consider adding a sourcetype stanza for the data (see the sketch below for one possible corrected layout).

The stanzas belong on the first full instance of Splunk that processes the data (indexers and HFs). Put them in the default directory of a custom app.

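A minimal sketch of what the corrected files might look like, assuming the host really is reported as vcenter and the source path matches the monitored files (adjust both to your data; the [nullQueue] stanza from the original post is dropped because it is not a valid props.conf stanza):

props.conf

[host::vcenter]
TRANSFORMS-null = setnull

transforms.conf

[setnull]
REGEX = envoy
DEST_KEY = queue
FORMAT = nullQueue
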
Hi @Gorwinn ,

let me understand: you want to take all the events except the ones containing the word "envoy", is that correct?

At first, how are you taking in these logs? If you are using a Heavy Forwarder, you have to put the props.conf and transforms.conf on the first full Splunk instance that the data passes through, in other words on the Heavy Forwarder if present, or otherwise on the Indexer.

Then, the transformation names must be unique in props.conf:

[host::vcenter]
TRANSFORMS-null = setnull

[source::/var/log/remote/catchall/*/*.log]
TRANSFORMS-null2 = setnull

Then check the regex using the rex command in Splunk.

Anyway, the issue usually is the location of the conf files (obviously I suppose that you restarted Splunk after modifying the conf files!).

The documentation is at https://docs.splunk.com/Documentation/Splunk/9.4.0/Forwarding/Routeandfilterdatad#Filter_event_data_and_send_to_queues

Ciao.
Giuseppe

Hello All!

I am trying to discard certain events before the indexers ingest them, using the keyword envoy. Below is an example:

timestamp vcenter envoy-access 2024-12-29T23:53:56.632Z info envoy[139855859431232] [Originator@6876 sub=Default] 2024-12-29T23:53:50.392Z POST /sdk HTTP/1.1 200 via_upstream

I tried creating props and transforms conf in $SPLUNK_HOME/etc/system/local but it's not working. My questions are: are my stanzas correct, and should I put them in the local directory? Appreciate any assistance you can provide, thank you.

props.conf

[nullQueue]
queue = nullQueue

[host::vcenter]
TRANSFORMS-null = setnull

[source::/var/log/remote/catchall/(IPAddress of Vcenter)/*.log]
TRANSFORMS-null = setnull

transforms.conf

[setnull]
REGEX = envoy
DEST_KEY = queue
FORMAT = nullQueue