All Posts


Hi @ITWhisperer, I have removed the round command and executed the search as below, but I am still getting the error:

timechart span=mon eval(avg(properties.elapsed)) as AverageResponsetime

Could you please share the correct query to resolve the issue?
At the point that the round() function is evaluated, the avg() has not been calculated by the timechart command. You will need to do your rounding in a separate command after the timechart command.
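A minimal sketch of that approach, keeping the placeholder <search query> from the thread: do the averaging inside timechart, then round in a separate eval afterwards.

```
<search query>
| timechart span=mon avg(properties.elapsed) AS AverageResponsetime
| eval AverageResponsetime=round(AverageResponsetime, 2)
```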
"@w6" aligns to the beginning of the previous Saturday (which is not in the current week!). Try "@w6+1w"
Here is the situation: search web security appliance data (index=network sourcetype=cisco_wsa_squid) for non-business activity, i.e., usage values other than Business (usage!=Business), during the previous business week. Here is the query I got for it:

index=network sourcetype=cisco_wsa_squid (usage!=Business) earliest=-7d@w1 latest=@w6

Could someone explain why latest is @w6 and not -7d@w6? Won't @w6 exclude the current week's data? #timemodifiers
I am trying to run an eval command to pull some stats, but it errors out as below:

Error in 'timechart' command: The eval expression has no fields: 'avg(properties.elapsed)'.

<search query> | timechart span=mon eval(round(avg(properties.elapsed),2)) as AverageResponsetime

This was working perfectly fine on our Splunk Enterprise system.
Hey @livehybrid,

Thank you very much; that solved the problem! Now that it can calculate the sum of the attachments, how do I make sure that the search accumulates every event where User A sends to the same recipient and calculates the sum of the overall traffic generated? Since I don't know how to put it into words properly, here's an example:

E-Mail 1: from User A -> to User B with size=10MB - sent at 11:10
E-Mail 2: from User A -> to User B with size=8MB - sent at 12:14
E-Mail 3: from User A -> to User C with size=20MB - sent at 13:41
E-Mail 4: from User A -> to User B with size=23MB - sent at 13:55

As shown above, User A sent to two different recipients (B and C). I now want the search to sum up the overall traffic from A to recipient X over a span of 4 hours, like so:

Traffic of A to B = 41MB
Traffic of A to C = 20MB

Let's say the threshold of my search would be 40MB over the span of 4 hours. Could you also help me with that? Thank you very much so far!
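One way to sketch this, reusing the field names from the tstats search earlier in this thread (the 4-hour span and 40MB threshold are the example values from the question, and the src_user filter is a placeholder): bucket events into 4-hour windows via the span option on _time, then sum per sender/recipient pair.

```
| tstats sum(All_Email.size) as size
    from datamodel=Email
    where All_Email.src_user="<internal-to-external filter>" AND sourcetype="fml:*"
    by All_Email.src_user, All_Email.recipient, _time span=4h
| eval size_MB=round(size/1000000,3)
| where size_MB>40
```

Note the windows are fixed 4-hour buckets, not a sliding 4-hour window, so traffic straddling a bucket boundary is split between buckets.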
Hi @Jailson , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
If you do

| stats earliest(_time) AS _time values(Object_Name) AS Object_Name BY ComputerName Process_Name

you only have the _time, Object_Name, ComputerName and Process_Name fields as output. Adding a non-existent field to the table command doesn't magically populate its contents. You need to add Initiating_Process_Name either as an aggregation with values() or as a BY field. The table command, by the way, is not needed after this stats.
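A sketch of the first option (adding the field as a values() aggregation; this assumes Initiating_Process_Name is actually extracted on the matching events):

```
index=wineventlog source=wineventlog:security EventCode IN (4663,4688) Process_Name="*welcome.exe"
| stats earliest(_time) AS _time
        values(Object_Name) AS Object_Name
        values(Initiating_Process_Name) AS Initiating_Process_Name
        BY ComputerName Process_Name
```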
Yes, we upgraded to 9.3 on Windows Server 2019.
Hi @Skinny  I think you probably meant to use sum(All_Email.size) as size instead of values(All_Email.size) as size? Then it should sum the sizes rather than return a list. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
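For reference, a sketch of just the changed aggregation in context (abbreviated to the relevant lines of the search from earlier in this thread; only the aggregation function differs):

```
| stats values(All_Email.recipient) as recipient
        sum(All_Email.size) as size
        by All_Email.message_id
| eval size_MB=round(size/1000000,3)
```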
Hey everyone,

I am currently trying to write a search that monitors outgoing e-mail traffic. The goal is to see if business-relevant information is being exfiltrated via e-mail. Since I am new to writing SPL, I tried the following. First, I wanted to write a simple search that would show me all e-mails where the size of the e-mail exceeds a set threshold. That's what I came up with:

| datamodel Email search
| search All_Email.src_user="SOMETHING I USE TO MAKE SURE THE TRAFFIC IS GOING FROM INTERNAL TO EXTERNAL" AND sourcetype="fml:*"
| stats values(_time) as _time values(All_Email.src_user) as src_user values(All_Email.recipient) as recipient values(All_Email.file_name) as file_name values(All_Email.subject) as subject values(All_Email.size) as size by All_Email.message_id
| eval size_MB=round(size/1000000,3)
| `ctime(alert_time)`
| where 'size_MB'>X
| fields - size

As far as I can see, it does what I initially wanted it to do. Upon further testing and thinking, I noticed a flaw: if data is exfiltrated over a period of time through many different e-mails, that search would not trigger, since the threshold X would not be exceeded by any single e-mail. That's why I wanted to write a new search using tstats (since the above search was pretty slow) where the traffic from A to the same recurring recipient is added up over a given time period; if the accumulated traffic exceeded a given threshold, the search would trigger. I then came up with this:

| tstats min(_time) as alert_time max(_time) as end_time values(All_Email.file_name) as file_name values(All_Email.subject) as subject values(All_Email.size) as size from datamodel=Email WHERE All_Email.src_user="SOMETHING I USE TO MAKE SURE THE TRAFFIC IS GOING FROM INTERNAL TO EXTERNAL" AND sourcetype="fml:*" by All_Email.src_user, All_Email.recipient
| eval size_MB=round(size/1000000,3)

This search is not finished (threshold missing, etc.), since I noticed that an e-mail with multiple attachments does not calculate the size correctly: it lists all the sizes of the different attachments but does not calculate a sum. I think the "by All_Email.src_user, All_Email.recipient" statement does not work as I intended. I would be happy to get some feedback on how to improve; maybe the code I wrote is way too complicated or does not work as it's supposed to.

Since I am new to writing SPL, are there any standards on how to write clean SPL, or any resources where I can study many different (good) searches so that I can improve at writing my own? I would appreciate any form of help. Thank you very much!
What is the nature of the data and field? For example, is it multi-value? Does it contain any special characters? In situations like this, it would be really helpful if you could share some sample anonymised events (preferably in a code block </> to preserve formatting).
I tried entering this with a slight tweak to the query:

index=wineventlog source=wineventlog:security EventCode IN (4663,4688) Process_Name="*welcome.exe"
| stats earliest(_time) AS _time values(Object_Name) AS Object_Name BY ComputerName Process_Name
| table _time ComputerName Object_Name Process_Name Initiating_Process_Name

However, this is my result:

_time            | ComputerName | Object_Name | Process_Name | Initiating_Process_Name
2025-03-19 16:00 | ABCDE        | object.exe  | welcome.exe  | (blank)

I am still not able to get all three columns (Object_Name, Process_Name, Initiating_Process_Name) into the same table.
I have a deployment server, and I'm trying to push an app from the deployment server to the universal forwarders. The 100 app has been pushed, but the logs are not flowing into Splunk. When I install the 100 app directly on the boxes, it works as expected.
1. Is the certificate in PEM format? (openssl x509 will happily accept other formats)
2. Does the certificate match the private key?
OK. That might be simple, but not easy. First, though, let's dig a bit into your search. It contains a subsearch. A subsearch is executed first and rendered into a set of conditions which are inserted into the outer search, so there is no way to "relay" additional fields into the results. As simple as that. You need another way (most probably some stats-based solution like the one shown by @gcusello).

But remember that a subsearch has its limitations, and at this moment you might actually not be getting correct results (even ignoring the lack of additional fields). A subsearch is silently finalized after reaching its execution timeout (by default 60 seconds) or its result limit (by default 10k), and you will not be notified about this in any way. So you might actually be getting incomplete results without knowing it.

OK, back to the original issue. You have two data sets. One is produced by

index=wineventlog source=wineventlog:security EventCode=4688

The other one by

index=wineventlog source=wineventlog:security EventCode=4663 Object_Name="*hello.exe" Process_Name="*welcome.exe"

As a side note, let me point out that searching for terms like "*hello.exe" and "*welcome.exe" is very inefficient, since Splunk cannot use its internal index of terms to find them and has to parse all events matching the other conditions. If you can avoid it, don't use wildcards at the beginning of a search term.
So the general approach of searching for

(index=wineventlog source=wineventlog:security EventCode=4688) OR (index=wineventlog source=wineventlog:security EventCode=4663 Object_Name="*hello.exe" Process_Name="*welcome.exe")

which can be simplified to

index=wineventlog source=wineventlog:security (EventCode=4688 OR (EventCode=4663 Object_Name="*hello.exe" Process_Name="*welcome.exe"))

and then doing

| stats values(field1) values(field2) <...> by commonfield1 commonfield2 <...>

is sound and is the way to go in general. If it's slow, it's probably due to a) the amount of data you have to process, or b) the wildcarded search terms. If you can narrow those terms, it will be much more efficient. Just for a test, try searching for

index=wineventlog source=wineventlog:security EventCode=4663 Object_Name="*hello.exe" Process_Name="*welcome.exe"

alone (maybe pass it to | stats count so that you don't have to drag all those events around; just check how long it takes to dig through the index). If it takes long, it means your original search (the one with the subsearch) was simply being finalized early.
Hi @charlottelimcl,

you have to correlate events using stats:

index=wineventlog source=wineventlog:security EventCode IN (4663,4688) Process_Name="*welcome.exe"
| stats earliest(_time) AS _time values(Object_Name) AS Object_Name BY ComputerName Process_Name
| table _time ComputerName Process_Name Initiating_Process_Name

Ciao
Giuseppe
These are very general questions. It all depends on what data you have and what service you purchased (bare Splunk Cloud, ES, ITSI...). This is something best discussed with your local friendly Splunk Partner, who will sit with you, go through your needs (and budget constraints), and suggest what can be done, how it can be done, and how much it will cost.
Hi @KKuser

Do you have either the IT Service Intelligence or Enterprise Security premium apps on Splunk Cloud? If you do, this might significantly change how you approach this task.

These sound like a list of deliverable work items, but each one should really be broken down for further analysis and collaboration with the stakeholder to determine exactly what they need; otherwise you may end up building something different from what they need (been there, done that). A lot of these also depend on other factors such as architecture, hosts, host types, and infrastructure hosting provider (on-prem? VMware? AWS? Azure?). Do you already have all the data in Splunk for these data sources? If so, are the appropriate technical add-ons installed?

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
We worked with Splunk support to solve this. Recording the response since others might find it useful.

1) phantom.get_base_url() helps access the URL set in the above screenshot (Base URL for Splunk SOAR). Previous attempts did not work, which is bizarre.

2) Accessing environment variables:

import os
import sys
import django

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "phantom_ui.settings")
django.setup()

from phantom_ui.ui.models import SystemSettings
s = SystemSettings.get_settings()
envVars = s.environment_variables
phantom.debug(envVars)

If your variable is called abc, you can now access its value with:

abcvalue = envVars['abc']['value']

3) If your environment variable is stored as a secret, step 2 returns a salted value, which is no good for authentication. Use the following to decrypt it before use:

import encryption_helper
clear_text_password = encryption_helper.decrypt(abcvalue, 'Splunk>Phantom')

By combining 2 and 3, you can programmatically access environment variables, including secret tokens, and avoid specifying plaintext auth credentials in your code blocks / custom functions!