All Posts


Hello everyone, here is the story: we have a search head cluster with three members, let's call them sh1, sh2, and sh3. These three search heads are not in the same domain/VLAN, so each one used to have its own SMTP server configuration. Now we are having issues sending reports from Splunk, and I noticed that all three search heads are using just one SMTP server, so the emails are not delivered. I tried to put the correct configuration for each search head in .../system/local/alert_actions.conf, but it is still not working. For now I will try to allow the search heads to communicate with all SMTP servers, but I am not sure that is the best solution. Is there a config I am missing about the email settings in a search head cluster? Thank you.
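For reference, a per-member override of the mail server would look something like the sketch below (the hostname is a placeholder; settings in system/local are not pushed by the deployer, but conf replication behavior and precedence should be verified for your Splunk version, and splunkd needs a restart for the change to take effect):

[email]
mailserver = smtp-site1.example.com:25
from = splunk@example.com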
Ok, from what I understand you need something like

<your_search>
| stats list(module) as modules last(T) as T by transactionID
| eval modules=mvjoin(modules," ")
| stats count by modules T

Depending on your sort order you might want first(T) instead of last(T)
I have been investigating a particular search an API user runs, which has become markedly slower after a specific date. Looking in the audittrail internal logs, I noticed that there is no significant increase in event count; however, the "total_slices" number increases significantly from before that date to after it. I couldn't find much information in the documentation on what this value represents. Does this mean the data within each event increased around that time?
I created the query below to get an hourly report of certain tasks. I got the final timechart values for the four different "connectiontype" values below, but I would like to rename the columns to something else.
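A minimal sketch of one way to do this, assuming a timechart split by connectiontype (the column names here are placeholders for the actual values):

... | timechart span=1h count by connectiontype
| rename "typeA" as "Friendly Name A", "typeB" as "Friendly Name B"

The rename command takes comma-separated old-as-new pairs, so all four columns can be renamed in one step after the timechart.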
To get a total number of users, use the stats command again.

... | stats count by IONS
| where count >= 10
``` So far we have one result per user. Count the number of results to get the number of users. ```
| stats count as IONS
| rename IONS as "User IDs"
Hey guys, hope you're all doing well! I wanted to experiment with Splunk's Deep Learning module to perform some tasks. As mentioned in the "barebone_template", there are two methods to pull data in from Splunk. Because I want the data to be live, I want to be able to run a search inside the Jupyter notebook itself, hence proceeding with method 1. Method 1 uses Splunk's "dsdlsupport" Python library, but when I used the same commands they have in their template, it throws the following error for their default settings: I wanted to check if someone has already faced/solved this issue before diving into the source code myself. Thank you and have a nice day. Best,
Hi @TheBravoSierra, please try this:

blacklist7 = EventCode\=4673.*Process Name:\s*C:\\Program Files\\WindowsApps.*\\win32\\DesktopExtension\.exe

Ciao. Giuseppe
I am not sure what you mean. You can use the same timepicker if you want; it depends on how you want your dashboard to work.
I also played around with the addcoltotals command, but that only gives me the totals of the count. I need the total of "User ID".
Hey ITWhisperer, as my results come from an index summary, would I need to have a separate timepicker to get the index summary date, or can I also use timepicker=_time?
I was able to successfully blacklist the below, so I am not sure why the difference.

blacklist6 = EventCode=5156 Application_Name="\device\harddiskvolume3\gcti\tsrvciscocm\cisco_cucm_tserver_bu_2\ciscocm_server.exe"

Application Name: \device\harddiskvolume3\gcti\tsrvciscocm\cisco_cucm_tserver_bu_2\ciscocm_server.exe
11/02/2023 10:28:49 AM LogName=Security EventCode=4673 EventType=0 ComputerName=XXXX SourceName=Microsoft Windows security auditing. Type=Information RecordNumber=XXXX Keywords=Audit Failure TaskCategory=Sensitive Privilege Use OpCode=Info Message=A privileged service was called. Subject: Security ID:XXXX Account Name:XXXX Account Domain:XXXX Logon ID:XXXX Service: Server: Security Service Name: - Process: Process ID:XXXX Process Name: C:\Program Files\WindowsApps\AD2F1837.myHP_25.52341.876.0_x64__v10z8vjag6ke6\win32\DesktopExtension.exe Service Request Information: Privileges: SeTcbPrivilege
Hi @TheBravoSierra, could you share a sample of your logs? Ciao. Giuseppe
Can someone help me with these regex entries in inputs.conf on a universal forwarder? For some reason, they aren't working. Much appreciated!

blacklist7 = EventCode=4673 Process_Name="C:\Program Files\WindowsApps\AD2F1837.myHP_25.52341.876.0_x64__v10z8vjag6ke6\win32\DesktopExtension.exe"
blacklist8 = EventCode=4673 Process_Name="C:\Program Files\WindowsApps\AD2F1837.myHP_26.52343.948.0_x64__v10z8vjag6ke6\win32\DesktopExtension.exe"
Assuming your timepicker is called timepicker and you want to use sTime to filter your events, try something like this

index=summary sourcetype=prod source=service DESCR="Central Extra"
| dedup SI_START,NAME,DESCR
| eval sTime=strptime(SI_START,"%Y-%m-%d %H:%M:%S")
| where relative_time(now(),$timepicker.earliest$) <= sTime AND relative_time(now(),$timepicker.latest$) > sTime
The name of the field might come from the log, but the name of the token doesn't have to match. If you can edit the dashboard, you can change the name of the token.
So I have a standalone Splunk instance with only data imported from botsv3, and I used these instructions for adjusting Splunk memory in settings: you can allocate more memory to Splunk by adjusting the settings in the limits.conf file. Locate this file in the Splunk installation directory and modify the max_mem setting to allocate more memory. This file typically resides in SPLUNK/etc/system/local/limits.conf. I changed max_mem = <new value>MB.

So far, after changing max_mem from the original 200 MB to 6,144 MB (6 GB) for Splunk to use, it seems I no longer have the bad allocation issue. I will continue monitoring for the error and will update my comment if I run into the bad allocation error again.

This solution may not work for everyone's specific situation, especially since you may enter an organization where the memory allocation has already been configured and you may not have permission to change any configuration. But if you are working with a home lab and making your own configuration as the Splunk admin, this is a good place to start.

Since none of the solutions seem to provide actual steps on how to make the adjustments for people who are learning, I figured I would include some descriptive steps in this discussion so people can contribute their expertise. Please build on the discussion with actionable steps, instead of replying that this solution may not work, so people can actually learn what the solution steps are.
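A sketch of the edit described above, using the setting name from the post (the exact setting name and stanza can differ between Splunk releases, so check the limits.conf.spec shipped with your version before applying this):

# $SPLUNK_HOME/etc/system/local/limits.conf
# Raise the memory ceiling from the original 200 MB to 6 GB,
# then restart splunkd for the change to take effect.
max_mem = 6144MB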
Hi All, I have a search query that allows me to pull results from an index summary. One of the fields is a time/date field. The data is pulled from a database and is a schedule, so the time in this field is not the indexed time. I would like to search on the time field, and I have the query below which allows me to do this. However, I would like to move this into a dashboard and have a timepicker. Is it possible to do this? I need a timepicker to grab the correct index summary data, then again for the field.

index=summary sourcetype=prod source=service DESCR="Central Extra"
| dedup SI_START,NAME,DESCR
| eval sTime=strptime(SI_START,"%Y-%m-%d %H:%M:%S")
| sort 0 -sTime
| eval eventday=strptime(SI_START,"%Y-%m-%d %H:%M:%S")
| bucket eventday span=1d
| eval eventday=strftime(eventday,"%Y-%m-%d")
| eval eventday1=strptime(eventday,"%Y-%m-%d")
| eval min_Date=strptime("2023-10-11","%Y-%m-%d")
| eval max_Date=strptime("2023-10-14","%Y-%m-%d")
| where (eventday1 >= min_Date AND eventday1 < max_Date)
| eval record=substr(CODE, -14, 1)
| eval record=case(record==1,"YES", record==0,"NO")
| stats count(eval(record="YES")) as events_record count(record) as events by NAME
| eval percentage_record=(events/events_record)*100
| fillnull value=0 percentage_record
| search percentage_record<100
| sort +percentage_record -events
There's no OOTB feature; rather, you can add tag/flag values in the search results themselves, and individual team members can then filter based on the flag. Let me know if you have any questions or thoughts.
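A minimal sketch of that flagging approach, assuming hypothetical field names (owner and review_flag are illustrative, not from any app):

... | eval review_flag=if(owner=="alice","assigned","unassigned")
| search review_flag="assigned"

Each team member would swap in their own filter value, so the same saved search serves everyone without any built-in assignment feature.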
Hello @ITWhisperer @yuanliu, thank you so much for your help. Is it possible to do it in one stats, instead of two, so I can keep my previous original calculation?

I currently have stats by ip with the following result:

ip      | dc(vuln) | dc(vuln) score > 0 | count(vuln) | sum(score)
1.1.1.1 | 3        | 2                  | 7           | 23
2.2.2.2 | 3        | 1                  | 4           | 10

After adding "stats values(score) as score by ip vuln" above the current stats by ip, count(vuln) no longer calculates the count of non-distinct/original vuln (7 => 3, 4 => 3), and sum(score) no longer calculates the sum of non-distinct/original score (23 => 10, 10 => 5):

ip      | dc(vuln) | dc(vuln) score > 0 | count(vuln) | sum(score) | sum (dc(vuln) score > 0)
1.1.1.1 | 3        | 2                  | *3          | *10        | 10
2.2.2.2 | 3        | 1                  | *3          | *5         | 5

This is what I would like to have:

ip      | dc(vuln) | dc(vuln) score > 0 | count(vuln) | sum(score) | sum (dc(vuln) score > 0)
1.1.1.1 | 3        | 2                  | 7           | 23         | 10
2.2.2.2 | 3        | 1                  | 4           | 10         | 5