
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

This is the solution that seems to be working. I edited the source code on the dashboard (alignment can be left, center, or right):

"options": {
  "tableFormat": {
    "align": "> table | pick(alignment)",
    "alignment": [ "left" ]
  }
}
Yes, you can do this. As you chain the outputlookups, put the broadest search first. As you summarize the different items you need, you can write to additional lookup files using append, or even bring in another file, do stats processing, and then write it back out.

<run your initial search, for the daily data>
| outputlookup dailyfile.csv
<add the full daily info to the weekly file, or do whatever summation is necessary>
| outputlookup append=true weeklyfile.csv
<bring in existing monthly data and summarize it, then write it back out>
| append [| inputlookup monthlyfile.csv]
| stats <summarize whatever>
| outputlookup monthlyfile.csv
You can have multiple outputlookup commands in the same search, so you can append each week's results to the monthly lookup and then inputlookup at the end of the month to process the monthly results.
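As a minimal sketch of that pattern, assuming the file names from the example above (user and count stand in for whatever fields you actually summarize):

<your weekly search>
| stats count by user
| outputlookup append=true monthlyfile.csv

<then, at month end>
| inputlookup monthlyfile.csv
| stats sum(count) as monthly_count by user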
Try something like this

index=123 sourcetype=teams
| search "Crestron Package Firmware version :"
| rex field=_raw "Crestron Package Firmware version :\s+(?<CCSFirmware>\S+)"
| eval "Time(utc)"=strftime(_time, "%y-%m-%d %H:%M:%S")
| table host "Time(utc)" CCSFirmware

(\S+ captures the version string; a lazy \S*? at the end of the pattern would match nothing. The Time(utc) field name is quoted in eval and table because it contains parentheses.)
That is what I ended up doing, but I wanted to know if there was another way. Looks like it is the only way... Thank you!
SPL is not a procedural language and does not have if...then...else... constructs. Try something like this

| lookup Payroll1.csv PARENTACCOUNT OUTPUT Product_Type as To_AccountID_99, AccountType as To_Account_99
| lookup Payroll2.csv PARENTACCOUNT, ID as PARENTID OUTPUT TYPE as To_AccountID_NOT_99, AccountType as To_Account_NOT_99
| eval To_AccountID=if(ID="99", To_AccountID_99, To_AccountID_NOT_99)
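If more lookups ever need to join this pattern, case() extends it without nesting if()s. A minimal sketch, where the ID="98" branch and To_AccountID_98 field are hypothetical placeholders for a third lookup's output:

| eval To_AccountID=case(ID="99", To_AccountID_99, ID="98", To_AccountID_98, true(), To_AccountID_NOT_99)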
Try something like this

index=db_it_network sourcetype=pan* url_domain="www.perplexity.ai" OR app=claude-base OR app=google-gemini* OR app=openai* OR app=bing-ai-base
| where date_wday="monday" OR date_wday="tuesday" OR date_wday="wednesday" OR date_wday="thursday" OR date_wday="friday"
| eval app=if(url_domain="www.perplexity.ai", url_domain, app)
| table user, app, date_wday
| stats count by user app date_wday
| chart count by user app
| join type=left user
    [search index=collect_identities
    | rename email as user
    | table user bunit]
Just spitballing from my prior Splunk experience: I've seen similar issues arise when permissions are messed up on the Splunk install directory, or when the service account is running as an incorrect user (i.e. root). The customer has assured me neither is the case, and that permissions are correct and the service is running as the correct account.
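One way to sanity-check that claim from Splunk itself is to look for permission errors in the internal logs. A minimal sketch; the quoted error strings are examples, not an exhaustive list:

index=_internal source=*splunkd.log* log_level=ERROR ("Permission denied" OR "Cannot open")
| stats count by host, component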
user             bunit   gemini   perplexity   openai
user1@mail.com   HR      1        1            0
user2@mail.com   IT      0        1            1

These are the results I am getting with the query, minus the bunit column, which is what I want to add. So basically a join to see where email=user (email is in index=collect_identities).
Below is my raw log

[08/28/2024 08:14:50] Current Device Info ...
******************************************************************************
Current Mode: Skull Teams
Current Device name: xxxxx
Crestron Package Environment version :1.00.00.004
Crestron Package Firmware version :1.17.00.040
Crestron Package Flex-Hub version :1.3.0127.00204
Crestron Package HD-CONV-USB-200 version :009.051

I want to extract only: Crestron Package Firmware version :xx.xx.xxx

I wrote a query like the one below, but it is not working; please help.

index=123 sourcetype = teams
| search "Crestron Package Firmware version :"
| rex field=_raw ":\s+(?<CCSFirmware>.*?)$"
| eval Time(utc)=strftime(_time, "%y-%m-%d %H:%M:%S")
| table host Time(utc) CCSFirmware
Hi all, hoping someone can help me.

We have a number of Windows servers with the Universal Forwarder installed (9.3.0), and they are configured to forward logs to an internal heavy forwarder running Linux.

Recently we've seen crashes on the Windows servers which seem to be because Splunk-MonitorNoHandle takes more and more RAM until there is none left. I have therefore limited the RAM that Splunk can take to stop the crashing. However, I need to understand the root cause.

It seems to me that the HF is blocking the connection for some reason, and when that happens the Windows server has to cache the entries in memory. Once the connection is blocked, it never seems to unblock, and the backlog just keeps getting bigger and bigger. Here is an example from the log:

08-21-2024 16:42:13.223 +0100 WARN TcpOutputProc [6844 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=splunkhf02.mydomain.net inside output group default-autolb-group from host_src=WINDOWS02 has been blocked for blocked_seconds=54300. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

I tried setting maxKBps to 0 in limits.conf on the Windows server; I also tried 256 and 512, but we're still having the same problems. If I restart the Splunk service it 'solves' the issue, but of course it also loses all of the log entries from the buffer in RAM.

Can anyone help me to understand the process here? Is the traffic being blocked by a setting on the HF? If so, where could I find it to modify it? Or is it something on the Windows server itself? Thanks for any assistance!
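When a forwarder reports blocked output like this, a common next step is to check which queues on the receiving HF are saturated. A minimal sketch against the internal metrics, assuming the HF reports into the same _internal index (splunkhf02 is the hostname from the log above):

index=_internal host=splunkhf02* source=*metrics.log* group=queue blocked=true
| stats count by host, name

If the HF's parsing or indexing queues show up as blocked, the bottleneck is downstream of the UF, e.g. the HF's own outputs or the indexer tier.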
When I search I want something like this: if(ID=99): then lookup 1, else: lookup 2. What I have right now is something like this, but I don't know how to put it in the correct syntax:

| eval To_AccountID=if(ID="99",
    [search | lookup Payroll1.csv PARENTACCOUNT OUTPUT Product_Type as To_AccountID, AccountType as To_Account],
    [search | lookup Payroll2.csv PARENTACCOUNT, ID as PARENTID OUTPUT TYPE as To_AccountID, AccountType as To_Account])

What is the best way to code something like this?
Hello, I have a CSV file that I monitor via the Universal Forwarder (UF). I'm encountering an issue where sometimes I cannot find the fields in Splunk when I run index=myindex, even though they appear on other days. The CSV file does not contain a header, and the format of the file is the same every day (each day starts with an empty file that is populated later). Here is the props.conf configuration that I'm using:

[csv_hff]
BREAK_ONLY_BEFORE_DATE =
DATETIME_CONFIG =
FIELD_NAMES = heure,id,num,id2,id3
INDEXED_EXTRACTIONS = csv
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = heure
TIME_FORMAT = %d/%m/%Y %H:%M:%S
category = Structured
description = Comma-separated value format. Set header and other settings in "Delimited Settings"
disabled = false
pulldown_type = true

Has anyone else encountered the same problem? Splunk version 9. Thank you
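One thing worth checking: INDEXED_EXTRACTIONS is applied on the forwarder itself, so this props.conf stanza must be deployed to the UF (not only to the indexers) for the fields to be extracted. Since indexed extractions produce indexed fields, you can verify per day whether they made it in with tstats. A minimal sketch, using the myindex and heure names from the post:

| tstats count where index=myindex by _time span=1d, heure

Days where heure comes back empty are days where the extraction did not happen at index time.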
Set your primary data source to a search like this

| loadjob $<data source which loads your saved search>:job.sid$

The primary search which loads your saved search needs to allow access to its metadata, e.g.
Thank you for this. To get it in the same column, I renamed the "as overall_distinct_user_count" in the appendpipe command.

| eventstats dc(user_numbers) as overall_distinct_user_count
| stats dc(user_numbers) as distinct_users_for_device, first(overall_distinct_user_count) as overall_distinct_user_count by device
| appendpipe
    [stats max(overall_distinct_user_count) as distinct_users_for_device
    | eval device = "All Devices"]
| table device, distinct_users_for_device
Try using Classic SimpleXML dashboards - Studio still has some catching up to do when compared to Classic.
Hi @PaulPanther, I believe I have access.
I'm familiar with "Chain Searching" - however, when chain searches execute, they also refresh the base search as well as all of the other linked chain searches. This is great for its use case. However, what I'm intending to do is have a base result set that I can then execute further queries/filters against to display filtered data without having to refresh/re-execute the base search. Similar to as if I were to use loadjob. The reason I can't use loadjob currently is because I cannot set the base search as a saved search, so I'm looking for a way around this. I also don't quite know how/if it's possible to implement loadjob <sid> into my dashboard based on a sid from another table within the dashboard.
Hi @PaulPanther, it's still the same, I'm facing the issue. How can I check whether I have access to the summary index? Could you please help me?
It is not clear what you are trying to do here - after the chart command, the app field no longer exists, so the sort is meaningless. What are your expected results going to look like? How do events in the collect_identities index relate to the events from the db_it_network index?