All Posts


user             bunit   gemini   perplexity   openai
user1@mail.com   HR      1        1            0
user2@mail.com   IT      0        1            1

These are the results I am getting from the query without the bunit column, which is the column I want to add. So basically I need a join to see where email=user (email is in index=collect_identities).
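A minimal sketch of one way to bolt bunit onto that output, reusing the field names mentioned in this thread (user, email, bunit). It is untested, and join is subject to subsearch limits, so for a very large identity set a lookup-based approach may be safer:

index=db_it_network sourcetype=pan* url_domain="www.perplexity.ai" OR app=claude-base OR app=google-gemini* OR app=openai* OR app=bing-ai-base
| where date_wday="monday" OR date_wday="tuesday" OR date_wday="wednesday" OR date_wday="thursday" OR date_wday="friday"
| eval app=if(url_domain="www.perplexity.ai", url_domain, app)
| stats count by user app date_wday
| chart count by user app
| join type=left user
    [ search index=collect_identities sourcetype=ldap:query
    | dedup email
    | rename email as user
    | fields user bunit ]

The subsearch reduces the identity events to one email/bunit pair per user before the join, and bunit then comes back as an extra column on each user row.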
Below is my raw log:

[08/28/2024 08:14:50] Current Device Info ...
******************************************************************************
Current Mode: Skull Teams
Current Device name: xxxxx
Crestron Package Environment version :1.00.00.004
Crestron Package Firmware version :1.17.00.040
Crestron Package Flex-Hub version :1.3.0127.00204
Crestron Package HD-CONV-USB-200 version :009.051

I want to extract only: Crestron Package Firmware version :xx.xx.xxx

I wrote a query like the one below, but it is not working. Please help.

index=123 sourcetype=teams
| search "Crestron Package Firmware version :"
| rex field=_raw ":\s+(?<CCSFirmware>.*?)$"
| eval Time(utc)=strftime(_time, "%y-%m-%d %H:%M:%S")
| table host Time(utc) CCSFirmware
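Two things stand out in that query: the rex anchors on the first ":" followed by whitespace anywhere in the event rather than on the firmware label (and the raw log has no space after the colon in "version :1.17.00.040"), and Time(utc) is awkward as an eval field name because of the parentheses. A hedged sketch of a corrected version, assuming the index and sourcetype names from the post:

index=123 sourcetype=teams "Crestron Package Firmware version :"
| rex field=_raw "Crestron Package Firmware version\s*:\s*(?<CCSFirmware>[\d.]+)"
| eval Time_utc=strftime(_time, "%y-%m-%d %H:%M:%S")
| table host Time_utc CCSFirmware

Anchoring the regex on the literal label means it captures only the firmware version, regardless of the other colon-separated lines in the event.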
Hi all, hoping someone can help me.

We have a number of Windows servers with the Universal Forwarder installed (9.3.0) and they are configured to forward logs to an internal heavy forwarder server running Linux.

Recently we've seen crashes on the Windows servers which seem to be because Splunk-MonitorNoHandle is taking more and more RAM until there is none left. I have therefore limited the RAM that Splunk can take to stop the crashing. However, I need to understand the root cause.

It seems to me that the reason is that the HF is blocking the connection for some reason, and when that happens the Windows server has to cache the entries in memory. Once the connection is blocked, it never seems to unblock and the backlog just keeps getting bigger and bigger.

Here is an example from the log:

08-21-2024 16:42:13.223 +0100 WARN TcpOutputProc [6844 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=splunkhf02.mydomain.net inside output group default-autolb-group from host_src=WINDOWS02 has been blocked for blocked_seconds=54300. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

I tried setting maxKBps to 0 in limits.conf on the Windows server, I also tried 256 and 512, but we're still having the same problems. If I restart the Splunk service it 'solves' the issue, but of course it also loses all of the log entries from the buffer in RAM.

Can anyone help me to understand the process here? Is the traffic being blocked by a setting on the HF? If so, then where could I find it to modify it? Or is it something on the Windows server itself? Thanks for any assistance!
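One way to narrow down which side is backing up is to look at the heavy forwarder's queue metrics in _internal. A rough sketch - the host pattern is taken from the warning above and should be adjusted to your HF's real hostname:

index=_internal host=splunkhf02* source=*metrics.log* group=queue
| eval fill_pct=round(current_size_kb/max_size_kb*100, 1)
| timechart span=5m max(fill_pct) by name

If a queue on the HF (parsing, indexing, or its own tcpout queue if it forwards onward) sits near 100% for long stretches, the blockage is downstream of the Windows UF and changing maxKBps on the UF will not help.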
When I search I want something like this: if(ID=99) then lookup 1, else lookup 2. What I have right now is something like this, but I don't know how to put it in the correct syntax:

| eval To_AccountID= if(ID="99",
    [search | lookup Payroll1.csv PARENTACCOUNT OUTPUT Product_Type as To_AccountID, AccountType as To_Account],
    [search | lookup Payroll2.csv PARENTACCOUNT, ID as PARENTID OUTPUT TYPE as To_AccountID, AccountType as To_Account])

What is the best way to code something like this?
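A lookup cannot be called from inside eval, but you can run both lookups into separate fields and pick between them with if(). A minimal sketch using the file and field names from the post (untested against your data):

| lookup Payroll1.csv PARENTACCOUNT OUTPUT Product_Type as To_AccountID_1, AccountType as To_Account_1
| lookup Payroll2.csv PARENTACCOUNT, ID as PARENTID OUTPUT TYPE as To_AccountID_2, AccountType as To_Account_2
| eval To_AccountID=if(ID="99", To_AccountID_1, To_AccountID_2)
| eval To_Account=if(ID="99", To_Account_1, To_Account_2)
| fields - To_AccountID_1, To_AccountID_2, To_Account_1, To_Account_2

If both lookup files are reasonably small, running both and choosing afterwards is usually simpler than trying to branch the search itself.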
Hello, I have a CSV file that I monitor via the Universal Forwarder (UF). I'm encountering an issue where sometimes I cannot find the fields in Splunk when I run index=myindex, even though they appear on other days. The CSV file does not contain a header, and the format of the file is the same every day (each day starts with an empty file that is populated later). Here is the props.conf configuration that I'm using:

[csv_hff]
BREAK_ONLY_BEFORE_DATE =
DATETIME_CONFIG =
FIELD_NAMES = heure,id,num,id2,id3
INDEXED_EXTRACTIONS = csv
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = heure
TIME_FORMAT = %d/%m/%Y %H:%M:%S
category = Structured
description = Comma-separated value format. Set header and other settings in "Delimited Settings"
disabled = false
pulldown_type = true

Has anyone else encountered the same problem? Splunk version 9. Thank you
Set your primary data source to a search like this:

| loadjob $<data source which loads your saved search>:job.sid$

The primary search which loads your saved search needs to allow access to its metadata, e.g.
Thank you for this. To get it in the same column, I renamed the "as overall_distinct_user_count" in the appendpipe command.

| eventstats dc(user_numbers) as overall_distinct_user_count
| stats dc(user_numbers) as distinct_users_for_device, first(overall_distinct_user_count) as overall_distinct_user_count by device
| appendpipe
    [ stats max(overall_distinct_user_count) as distinct_users_for_device
    | eval device = "All Devices" ]
| table device, distinct_users_for_device
Try using Classic SimpleXML dashboards - Studio still has some catching up to do when compared to Classic
Hi @PaulPanther I believe I have access    
I'm familiar with "Chain Searching" - however, when chain searches execute, they also refresh the base search as well as all of the other linked chain searches. This is great for its use case. However, what I'm intending to do is have a base result set that I can then execute further queries/filters against to display filtered data without having to refresh/re-execute the base search - similar to what I could do with loadjob. The reason I can't use loadjob currently is that I cannot set the base search as a saved search, so I'm looking for a way around this. I also don't quite know how/if it's possible to implement loadjob <sid> into my dashboard based on a sid from another table within the dashboard.
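For what it's worth, in a Classic SimpleXML dashboard this "run the base once, filter its results later" idea can be sketched by capturing the base search's job SID into a token and pointing the panel searches at it with loadjob. Everything below (search id, token name, the queries themselves) is made up for illustration, not taken from your dashboard:

<dashboard>
  <!-- base search runs on load; its job SID is saved into a token when it finishes -->
  <search id="base_search">
    <query>index=_internal | stats count by sourcetype</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
    <done>
      <set token="base_sid">$job.sid$</set>
    </done>
  </search>

  <row>
    <panel>
      <table>
        <!-- reloads the finished job instead of re-running the base search -->
        <search>
          <query>| loadjob $base_sid$ | search sourcetype=splunkd</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>

The loadjob panels only re-filter the already-finished job, so they don't re-run the base search; the trade-off is that the job must still exist (within its TTL) whenever a panel refreshes.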
Hi @PaulPanther Still the same issue. How can I check whether I have access to the summary index? Could you please help me?
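One rough way to check from the search bar - this is only a sketch, and I would still confirm with your Splunk admin - is to list the indexes your role can actually search and see whether the summary index shows up:

| eventcount summarize=false index=*
| dedup index
| table index

If the summary index is not in that list (or returns nothing when searched directly over a wide time range), your role most likely does not include it among its searchable indexes.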
It is not clear what you are trying to do here - after the chart command, the app field no longer exists so the sort is meaningless. What are your expected results going to look like? How do events in the collect_identities index relate to the events from the db_it_network index?
Thanks for the suggestion to append a space to the string. I have tried:

| eval new_field = existing_field + " "

and:

| eval new_field = existing_field + " "

Both show it adjusted in the statistics page but not on the dashboard.
I wanted to omit data for non-business hours and weekends. I have tried the query below and am not getting any results - this is the portion I added:

eval hour = tonumber(strftime(_time,"%H")) | eval dow = tonumber(strftime(_time,"%w")) | where hour>=6 AND hour<=18 AND dow!=0 AND dow!=6

| mstats sum(builtin:apps.web.actionCount.load.browser:parents) As "Load_Count1", avg(builtin:apps.web.visuallyComplete.load.browser:parents) As "Avg_Load_Response1", sum(builtin:apps.web.actionCount.xhr.browser:parents) As "XHR_Count1", avg(builtin:apps.web.visuallyComplete.xhr.browser:parents) As "Avg_Xhr_Response1" where index=itsi_im_metrics AND source.name="DT_Prod_SaaS" AND entity.browser.name IN ("Desktop Browser","Mobile Browser") AND entity.application.name ="xxxxx" earliest=-31d@d latest=@d-1m by entity.application.name
| eval hour = tonumber(strftime(_time,"%H"))
| eval dow = tonumber(strftime(_time,"%w"))
| where hour>=6 AND hour<=18 AND dow!=0 AND dow!=6
| eval Avg_Load_Response1=round((Avg_Load_Response1/1000),2), Avg_Xhr_Response1=round((Avg_Xhr_Response1/1000),2), Load_Count1=round(Load_Count1,0), XHR_Count1=round(XHR_Count1,0)
| table entity.application.name, Avg_Load_Response1
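One pattern that can work here - a sketch reusing the metric and field names from the query above, not something I have run against this data - is to let mstats produce hourly buckets first, filter those buckets by hour and day, and only then collapse to one row. A where on _time-derived fields has nothing to act on when mstats returns a single pre-summarized row, and commands placed before mstats never run at all:

| mstats sum(builtin:apps.web.actionCount.load.browser:parents) as Load_Count1 avg(builtin:apps.web.visuallyComplete.load.browser:parents) as Avg_Load_Response1 where index=itsi_im_metrics AND source.name="DT_Prod_SaaS" AND entity.application.name="xxxxx" earliest=-31d@d latest=@d-1m by entity.application.name span=1h
| eval hour=tonumber(strftime(_time,"%H")), dow=tonumber(strftime(_time,"%w"))
| where hour>=6 AND hour<=18 AND dow!=0 AND dow!=6
| stats sum(Load_Count1) as Load_Count1 avg(Avg_Load_Response1) as Avg_Load_Response1 by entity.application.name
| eval Avg_Load_Response1=round(Avg_Load_Response1/1000,2), Load_Count1=round(Load_Count1,0)

Note that the final avg is an average of hourly averages, so treat it as an approximation; the XHR metrics can be added back the same way.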
Hey PaulPanther   Sorry for the delayed response.  Yes this is for every user.
Currently have an active case open. Will gladly share the results when I get them!
Running queries on really large sets of data, and sending the output to an outputlookup works well for weekly refreshed dashboards. Is there a way to have some numbers from the initial report go into a separate, second outputlookup for monthly tracking?  For example a weekly report or dashboard shows me details on a daily basis, and the weekly summary - great.  Now the weekly summary should go additionally to a separate file for the monthly view. Is there a way to 'tee' results to different outputlookups? 
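There is no literal "tee", but one way to sketch that idea is to write the detailed results to the first lookup mid-pipeline, then use appendpipe to build the summary rows and write only those to a second lookup with append=true. The base search, lookup file names, and summary fields below are placeholders, not anything from your environment:

index=_internal earliest=-7d@d latest=@d
| stats count by sourcetype
| outputlookup weekly_details.csv
| appendpipe
    [ stats sum(count) as weekly_total
    | eval week=strftime(now(), "%Y-%m-%d")
    | outputlookup append=true monthly_summary.csv
    | where 1=2 ]

Because the appendpipe block filters its own rows out at the end, the monthly file still gets written while the search's visible output (and the weekly lookup) is unchanged.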
Numbers are usually aligned to the right, strings are aligned to the left. If the string contains only numbers, it may be aligned in a table panel to the right. To force it to remain as a string (and be aligned to the left), you could append a space to the string.
Good day, I have a query that I would like to add more information onto. The query pulls all users that accessed an AI site and gives me data for weekdays as a 1 or 0 depending on whether the site was accessed. The query gets a user from index db_it_network, and I would like to add the department of each user by querying index=collect_identities sourcetype=ldap:query. The users appear in the collect_identities index as 'email' and their department in the bunit field.

index=db_it_network sourcetype=pan* url_domain="www.perplexity.ai" OR app=claude-base OR app=google-gemini* OR app=openai* OR app=bing-ai-base
| where date_wday="monday" OR date_wday="tuesday" OR date_wday="wednesday" OR date_wday="thursday" OR date_wday="friday"
| eval app=if(url_domain="www.perplexity.ai", url_domain, app)
| table user, app, date_wday
| stats count by user app date_wday
| chart count by user app
| sort app 0

Note: the | stats | chart is necessary to deduplicate so that one user returns results for one app per day.
I can get a numeric table aligned to the left in the statistics field with

| eval count=printf("%-10d",<your_field>)

However, the alignment does not translate to the dashboard. Any insight on why this doesn't work, or if there is another way to align numeric results to the left on a dashboard for aesthetic purposes?
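If this is a Classic (SimpleXML) dashboard, one common workaround is to leave the data numeric and force the alignment with CSS on the table panel instead. A sketch - the panel id, the query, and the hidden-row token name are placeholders, and the selector may need tweaking for your Splunk version:

<dashboard>
  <!-- hidden row: depends on a token that is never set, so only the CSS takes effect -->
  <row depends="$alwaysHideCSS$">
    <panel>
      <html>
        <style>
          /* left-align every body cell of the table with id="aligned_table" */
          #aligned_table table tbody td {
            text-align: left !important;
          }
        </style>
      </html>
    </panel>
  </row>

  <row>
    <panel>
      <table id="aligned_table">
        <search>
          <query>index=_internal | stats count by sourcetype</query>
          <earliest>-60m</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</dashboard>

This keeps the field numeric (so sorting still behaves numerically) while the presentation is handled by the dashboard rather than by printf or string concatenation.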