All Posts

Hello, for this question I am referencing the documentation page: https://docs.splunk.com/Documentation/SOARonprem/6.2.2/Install/UpgradePathForUnprivilegedInstalls

There are two sets of conflicting information, and I do not know how to proceed with my ON-PREMISES, UNPRIVILEGED, PRIMARY + WARM STANDBY CONFIGURATION (the database is on the instances, not external).

At the top of the documentation, it states: "Unprivileged Splunk SOAR (On-premises) running a release earlier than release 6.2.1 can be upgraded to Splunk SOAR (On-premises) release 6.2.1, and then to release 6.2.2." It says CAN BE. So... is it optional?

It also states: "All deployments must upgrade to Splunk SOAR (On-premises) 6.2.1 before upgrading to higher releases in order to upgrade the PostgreSQL database." It says MUST UPGRADE. So... is it mandatory?

But then, towards the BOTTOM of the table, I'm looking at the row for a starting version of "6.2.0". Steps 1 & 2 are conditionals for clustered deployments and external PostgreSQL databases. Step 3 goes directly to upgrading to 6.2.2.

So... do I, or do I NOT, upgrade to 6.2.1 first?
Not the most efficient way of doing what? You could improve the performance of the query by combining the first two commands.

index=myindex (EventCode=4663 OR EventCode=4660) OR (EventID=2 OR EventID=3 OR EventID=11) OR (Processes="*del*.exe" OR Processes="*rm*.exe" OR Processes="*rmdir*.exe") process!="C:\\Windows\\System32\\svchost.exe" process!="C:\\Program Files\\Microsoft Advanced Threat Analytics\\Gateway\\Microsoft.Tri.Gateway.exe" process!="C:\\Program Files\\Common Files\\McAfee\\*" process!="C:\\Program Files\\McAfee*" process!="C:\\Windows\\System32\\enstart64.exe" process!="C:\\Windows\\System32\\wbem\\WmiPrvSE.exe" process!="C:\\Program Files\\Windows\\Audio\\EndPoint\\3668cba\\cc\\x64\\AudioManSrv.exe"
| table _time, source, subject, object_file_path, SubjectUserName, process, result

Legibility can be improved a little by the IN operator.

index=myindex (EventCode=4663 OR EventCode=4660) OR (EventID=2 OR EventID=3 OR EventID=11) OR (Processes="*del*.exe" OR Processes="*rm*.exe" OR Processes="*rmdir*.exe") NOT process IN ("C:\\Windows\\System32\\svchost.exe" "C:\\Program Files\\Microsoft Advanced Threat Analytics\\Gateway\\Microsoft.Tri.Gateway.exe" "C:\\Program Files\\Common Files\\McAfee\\*" "C:\\Program Files\\McAfee*" "C:\\Windows\\System32\\enstart64.exe" "C:\\Windows\\System32\\wbem\\WmiPrvSE.exe" "C:\\Program Files\\Windows\\Audio\\EndPoint\\3668cba\\cc\\x64\\AudioManSrv.exe")
| table _time, source, subject, object_file_path, SubjectUserName, process, result
Howdy, I'm fairly new to Splunk and couldn't google the answer I wanted, so here we go. I am trying to simplify my queries and filter down the search results better. Current example query:

index=myindex
| search (EventCode=4663 OR EventCode=4660) OR (EventID=2 OR EventID=3 OR EventID=11) OR (Processes="*del*.exe" OR Processes="*rm*.exe" OR Processes="*rmdir*.exe") process!="C:\\Windows\\System32\\svchost.exe" process!="C:\\Program Files\\Microsoft Advanced Threat Analytics\\Gateway\\Microsoft.Tri.Gateway.exe" process!="C:\\Program Files\\Common Files\\McAfee\\*" process!="C:\\Program Files\\McAfee*" process!="C:\\Windows\\System32\\enstart64.exe" process!="C:\\Windows\\System32\\wbem\\WmiPrvSE.exe" process!="C:\\Program Files\\Windows\\Audio\\EndPoint\\3668cba\\cc\\x64\\AudioManSrv.exe"
| table _time, source, subject, object_file_path, SubjectUserName, process, result

This is just an example; I do it this same way for multiple different fields and indexes. I know it's not the most efficient way of doing it, but I don't know any better ways. Usually I'll start broad and whittle down the things I know I'm not looking for.

Is there either a way to simplify this (I could possibly do regex, but I'm not really good at that) or something else like this to make my life easier, such as combining all the results I want to filter for one field? Any and all help/criticism is appreciated.
This is the result of the snippet I posted.
I don't have a "saved search" for this query, unfortunately, as I'm not yet able to make an actual "saved search". I'm just trying to perform some filtering on the results of a search made within the dashboard without reloading the search. I've attempted what I think you're proposing, but the "PostProcessTable"/"PostProcessSearch", which is supposed to load the job from the "BaseTable"/"BaseSearch", is not loading. Instead, it reads "Waiting for input...".

I will note that I am on Splunk version 9.0.4, and the switch you pointed out, "Access search results or metadata", reads as "Use search results or job status as tokens" in my version of Dashboard Studio. I'm not sure if the issue is:

- my version of Splunk being 9.0.4
- the fact that I'm not using a saved search
- or I'm implementing your proposal incorrectly (very very possible)

See example snippet below:

"visualizations": {
    "viz_A2Ecjpct": {
        "type": "splunk.table",
        "dataSources": {
            "primary": "ds_fpJiS8Hp"
        },
        "title": "BaseTable"
    },
    "viz_Ok7Uvz2b": {
        "type": "splunk.table",
        "title": "PostProcessTable",
        "dataSources": {
            "primary": "ds_q4BDo5Wr"
        }
    }
},
"dataSources": {
    "ds_fpJiS8Hp": {
        "type": "ds.search",
        "options": {
            "query": "| makeresults count=5",
            "queryParameters": {
                "earliest": "-15m",
                "latest": "now"
            },
            "enableSmartSources": true
        },
        "name": "BaseSearch"
    },
    "ds_q4BDo5Wr": {
        "type": "ds.search",
        "options": {
            "query": "| loadjob $ds_fpJiS8Hp:job.sid$",
            "enableSmartSources": true
        },
        "name": "PostProcessSearch"
    }
},
Hi. Running 9.0.6, and a user (who is the owner) can schedule REPORTS, but not DASHBOARDS. It's a CLASSIC dashboard (not the new fancy Stooooodio one).

Dashboards --> Find Dashboard --> Edit button --> NO 'Edit Schedule'
Open dashboard, top right export, NO 'Schedule PDF'

My local admin says 'maybe they changed something in 9.0.6', but I'm unconvinced until this legendary community agrees. It "feels" like a missing permission is all.
This is the solution that seems to be working: edited the source code on the dash. (You can change left, center, or right.) Note the comma after the "align" entry, which the snippet needs to be valid JSON:

],
"options": {
    "tableFormat": {
        "align": "> table |pick(alignment)",
        "alignment": [
            "left"
        ]
Yes, you can do this. As you chain the outputlookups, put the broadest search first. As you summarize the different items you need, you can write to additional lookup files using append, or even bring in another file, do stats processing, and then write it back out.

<run your initial search, for the daily data>
| outputlookup dailyfile.csv
<add the full daily info to the weekly file, or do whatever summation is necessary>
| outputlookup append=true weeklyfile.csv
<bring in existing monthly data and summarize it, then write it back out>
| append [| inputlookup monthlyfile.csv]
| stats <summarize whatever>
| outputlookup monthlyfile.csv
You can have multiple outputlookup commands in the same search, so you can append each week's results to the monthly lookup and then use inputlookup at the end of the month to process the monthly results.
Try something like this. (Two adjustments to note: the sample log has no space after the colon, so the pattern uses \s* rather than \s+, and the capture is a greedy \S+ because a lazy \S*? would match zero characters. The eval target needs single quotes because of the parentheses in the field name.)

index=123 sourcetype=teams
| search "Crestron Package Firmware version :"
| rex field=_raw "Crestron Package Firmware version :\s*(?<CCSFirmware>\S+)"
| eval 'Time(utc)'=strftime(_time, "%y-%m-%d %H:%M:%S")
| table host Time(utc) CCSFirmware
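Outside of Splunk, the capture logic can be sanity-checked with an equivalent Python regex against the sample line from the question's raw log. This is only an illustration of the pattern, not the rex command itself; it uses \s* after the colon (the sample shows no space there) and a greedy \S+ capture, since a lazy \S*? would successfully match zero characters:

```python
import re

# Sample line taken from the raw log in the question
line = "Crestron Package Firmware version :1.17.00.040"

# Python equivalent of the intended rex capture
pattern = re.compile(r"Crestron Package Firmware version :\s*(?P<CCSFirmware>\S+)")

match = pattern.search(line)
print(match.group("CCSFirmware"))  # → 1.17.00.040
```

The same pattern also tolerates logs that do put whitespace after the colon, which is why \s* is the safer choice here.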
That is what I ended up doing, but I wanted to know if there was another way like that! Looks like it is the only way... Thank you!
SPL is not a procedural language and does not have if...then...else... constructs. Try something like this

| lookup Payroll1.csv PARENTACCOUNT OUTPUT Product_Type as To_AccountID_99, AccountType as To_Account_99
| lookup Payroll2.csv PARENTACCOUNT, ID as PARENTID OUTPUT TYPE as To_AccountID_NOT_99, AccountType as To_Account_NOT_99
| eval To_AccountID=if(ID="99", To_AccountID_99, To_AccountID_NOT_99)
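The shape of that answer — run both lookups unconditionally, then pick one result with a condition — can be illustrated outside SPL. This hypothetical Python sketch mirrors the pattern only; the file names, keys, and values are invented for illustration:

```python
# Hypothetical stand-ins for the two lookup files
payroll1 = {"ACCT-1": ("Checking", "Personal")}         # PARENTACCOUNT -> (Product_Type, AccountType)
payroll2 = {("ACCT-1", "42"): ("Savings", "Business")}  # (PARENTACCOUNT, PARENTID) -> (TYPE, AccountType)

def resolve(parent_account, event_id):
    # Run both "lookups" unconditionally, as the two SPL lookup commands do...
    from_99 = payroll1.get(parent_account)
    from_not_99 = payroll2.get((parent_account, event_id))
    # ...then pick one result with a condition, like the final eval/if.
    return from_99 if event_id == "99" else from_not_99

print(resolve("ACCT-1", "99"))   # uses the Payroll1 stand-in
print(resolve("ACCT-1", "42"))   # uses the Payroll2 stand-in
```

The point of the pattern is that the branching happens on the result, not on which lookup runs — both enrichments are always applied, and the condition merely selects between the two sets of output fields.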
Try something like this

index=db_it_network sourcetype=pan* url_domain="www.perplexity.ai" OR app=claude-base OR app=google-gemini* OR app=openai* OR app=bing-ai-base
| where date_wday="monday" OR date_wday="tuesday" OR date_wday="wednesday" OR date_wday="thursday" OR date_wday="friday"
| eval app=if(url_domain="www.perplexity.ai", url_domain, app)
| table user, app, date_wday
| stats count by user app date_wday
| chart count by user app
| join type=left user
    [search index=collect_identities | rename email as user | table user bunit]
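What the left join at the end contributes can be illustrated outside SPL. This hypothetical Python sketch (data made up for illustration) shows the same idea: each per-user row is enriched with a business unit from a second data set, and users with no match are kept with an empty bunit, which is what type=left means:

```python
# Hypothetical per-user app rows, standing in for the chart output
usage = [
    {"user": "user1@mail.com", "app": "google-gemini"},
    {"user": "user2@mail.com", "app": "openai"},
]

# Hypothetical identity data, standing in for index=collect_identities
identities = {"user1@mail.com": "HR"}  # email -> bunit

for row in usage:
    # Left join: enrich when a match exists, keep the row either way
    row["bunit"] = identities.get(row["user"])  # None when there is no match

print(usage)
```

With an inner join, user2@mail.com would be dropped entirely; the left join keeps the row and just leaves bunit empty.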
Just spitballing from my prior Splunk experience: I've seen similar issues arise when permissions are messed up from a Splunk install directory perspective, or if the service account is running as an incorrect user (i.e. root). The customer has assured me neither is the case, and that permissions are correct and the service is running as the correct account.
user            bunit   gemini   perplexity   openai
user1@mail.com  HR      1        1            0
user2@mail.com  IT      0        1            1

This is the result I am getting with the query, minus the bunit column, which is what I want to add. So basically a join to see where email=user (email is in index=collect_identities).
Below is my raw log

[08/28/2024 08:14:50] Current Device Info ...
******************************************************************************
Current Mode: Skull Teams
Current Device name: xxxxx
Crestron Package Environment version :1.00.00.004
Crestron Package Firmware version :1.17.00.040
Crestron Package Flex-Hub version :1.3.0127.00204
Crestron Package HD-CONV-USB-200 version :009.051

I want to extract only: Crestron Package Firmware version :xx.xx.xxx

I wrote a query like below, but it is not working. Please help.

index=123 sourcetype=teams
| search "Crestron Package Firmware version :"
| rex field=_raw ":\s+(?<CCSFirmware>.*?)$"
| eval Time(utc)=strftime(_time, "%y-%m-%d %H:%M:%S")
| table host Time(utc) CCSFirmware
Hi all, hoping someone can help me. We have a number of Windows servers with the Universal Forwarder installed (9.3.0), and they are configured to forward logs to an internal heavy forwarder server running Linux.

Recently we've seen crashes on the Windows servers which seem to be because Splunk-MonitorNoHandle is taking more and more RAM until there is none left. I have therefore limited the RAM that Splunk can take to stop the crashing. However, I need to understand the root cause. It seems to me that the HF is blocking the connection for some reason, and when that happens the Windows server has to cache the entries in memory. Once the connection is blocked, it never seems to unblock, and the backlog just keeps getting bigger and bigger.

Here is an example from the log:

08-21-2024 16:42:13.223 +0100 WARN TcpOutputProc [6844 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=splunkhf02.mydomain.net inside output group default-autolb-group from host_src=WINDOWS02 has been blocked for blocked_seconds=54300. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

I tried setting maxKBps to 0 in limits.conf on the Windows server; I also tried 256 and 512, but we're still having the same problems. If I restart the Splunk service it 'solves' the issue, but of course it also loses all of the log entries from the buffer in RAM.

Can anyone help me to understand the process here? Is the traffic being blocked by a setting on the HF? If so, where could I find it to modify it? Or is it something on the Windows server itself? Thanks for any assistance!
When I search, I want something like this: if(ID=99): then lookup 1, else: lookup 2. What I have right now is something like this, but I don't know how to put it in the correct syntax:

| eval To_AccountID=if(ID="99",
    [search | lookup Payroll1.csv PARENTACCOUNT OUTPUT Product_Type as To_AccountID, AccountType as To_Account],
    [search | lookup Payroll2.csv PARENTACCOUNT, ID as PARENTID OUTPUT TYPE as To_AccountID, AccountType as To_Account])

What is the best way to code something like this?
Hello, I have a CSV file that I monitor via the Universal Forwarder (UF). I'm encountering an issue where sometimes I cannot find the fields in Splunk when I run index=myindex, even though they appear on other days. The CSV file does not contain a header, and the format of the file is the same every day (each day starts with an empty file that is populated later). Here is the props.conf configuration that I'm using:

[csv_hff]
BREAK_ONLY_BEFORE_DATE =
DATETIME_CONFIG =
FIELD_NAMES = heure,id,num,id2,id3
INDEXED_EXTRACTIONS = csv
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = heure
TIME_FORMAT = %d/%m/%Y %H:%M:%S
category = Structured
description = Comma-separated value format. Set header and other settings in "Delimited Settings"
disabled = false
pulldown_type = true

Has anyone else encountered the same problem? Splunk version 9. Thank you.
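As a quick sanity check of the TIME_FORMAT in that stanza, the strptime pattern can be tested against a sample value for the heure field. The actual file contents aren't shown in the post, so the sample value below is an assumption that simply follows the %d/%m/%Y %H:%M:%S layout:

```python
from datetime import datetime

# TIME_FORMAT from the props.conf stanza above
fmt = "%d/%m/%Y %H:%M:%S"

# Assumed sample value for the "heure" field, following that layout
sample = "28/08/2024 08:14:50"

parsed = datetime.strptime(sample, fmt)
print(parsed.isoformat())  # → 2024-08-28T08:14:50
```

If a day's rows used a different layout (e.g. month-first), this parse would fail or silently swap day and month, which is one cheap thing to rule out when fields vanish on some days only.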
Set your primary data source to a search like this

| loadjob $<data source which loads your saved search>:job.sid$

The data source which loads your saved search needs to allow access to its metadata, e.g. via the "Access search results or metadata" option on the data source.