All Posts

Hello @karthi2809, I do not understand the use of the mvmap command here. Generally, mvmap is used to perform an iterative operation on each value of a multivalue field. Your SPL currently reads as an attempt to map the ImpConReqId field to the string "ImpConReqId: <<value of ImpConReqId>>", and whenever the if condition fails, the value is replaced with null(), so ImpConReqId gets mapped to a null() value. I would suggest you first filter out the null values using the isnull() or isnotnull() functions and then perform the multivalue operations. Also, if you can share the full SPL query, it would be easier to assist you further.   Thanks, Tejas.
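For illustration, a minimal sketch of that filter-then-map approach as a pipeline fragment (field names are taken from the question and are assumptions about the data; slot it into the existing search):

| eval ImpConReqID=mvfilter(match(ImpConReqID, ".+"))  ``` keep only non-empty values ```
| eval ImpCon=mvmap(ImpConReqID, "ImpConReqID: ".ImpConReqID)  ``` no null() branch needed once the empties are gone ```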
Hi @Siddharthnegi , yes, when you save the panel (creating the dashboard), you can set up a starting zoom level and default starting coordinates that are saved with the dashboard. Ciao. Giuseppe
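If this is a classic (Simple XML) dashboard, that would be the mapping options on the map element, roughly as sketched below (the query, center, and zoom values are placeholders, not taken from the thread):

<map>
  <search>
    <query>index=web | iplocation clientip | geostats count</query>
  </search>
  <!-- starting coordinates and zoom level saved with the dashboard -->
  <option name="mapping.map.center">(37.8,-96.9)</option>
  <option name="mapping.map.zoom">4</option>
</map>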
Hi @norbertt911, this isn't a Splunk question but a Linux question. Anyway, we had a similar issue with rsyslog and we solved it by changing the default template: in rsyslog, for each rule, you have a dynafile (in which you reference the template that builds the path of the file to write) and a template (by default "rsyslog-fmt") that you use to give a format to your output. Ciao. Giuseppe
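A rough sketch of that dynafile/template pairing in rsyslog's RainerScript syntax (the path, template names, and format string here are assumptions, not the poster's actual config):

# one template builds the per-host file path, the other formats the output line
template(name="PerHostFile" type="string" string="/var/log/remote/%HOSTNAME%.log")
template(name="RawFormat"   type="string" string="%rawmsg%\n")

# the rule writes each message to its dynamic file using the format template above
action(type="omfile" dynaFile="PerHostFile" template="RawFormat")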
What do you mean by "predefined by you"?
Hi @Siddharthnegi , I don't think that's possible: the zoom level in a map is predefined by you when you create the panel. You can only modify it manually, using the buttons in the map or your mouse. Ciao. Giuseppe
Hello, I have a dashboard with many panels, and in each panel I am using the geostats command to show the results of that panel's search on a world map. I want to add a zoom feature to it. Let me explain: say I am on panel 1 and I have zoomed in on America to see in which area the results are showing. Now, if I switch to a different panel, I want it to also be zoomed in on America. Is that possible?
Hi All, I want to filter out null values. My field ImpCon has null values, and I want to filter out the values I don't want to show in the table. I am trying the query below, but it is still showing the null values.

| eval ImpCon=mvmap(ImpConReqID,if(match(ImpConReqID,".+"),"ImpConReqID: ".ImpConReqID,null()))
| eval orcaleid=mvfilter(isnotnull(oracle))
| eval OracleResponse=mvjoin(orcaleid," ")
Update, in case anyone tried testing whether an "append" option exists: the "append" option does actually save, but it appears not to work.
Specifically, I mean the dataSources section discussed here: https://docs.splunk.com/Documentation/Splunk/9.2.1/DashStudio/dashDef#The_dataSources_section

Hypothetically, I have two tables, each stored in its own data source stanza:
Table 1 = ds.search stanza 1
Table 2 = ds.search stanza 2

The goal is to append the tables together and then use the "stats join" method to merge them. If possible, this merge could be done as a ds.chain type stanza with two extend options, but that does not appear to be allowed. Here's the documentation for data source options: https://docs.splunk.com/Documentation/Splunk/9.2.1/DashStudio/dsOpt The document seems to be missing options like "extend", so I'm hoping someone knows whether there are any additional hidden options.

I am trying to avoid [] subsearches because of the 50,000 row limit, so the following append command is not desired:
<base search> | append [search ....]

Does anyone with mastery of JSON hacks know whether appending two data source stanzas together is possible? Thank you.
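For reference, a minimal sketch of what a chained data source looks like in the dashboard definition (the data source names and queries below are made up); as described above, the extend option appears to reference only a single base data source:

"dataSources": {
    "ds_base": {
        "type": "ds.search",
        "options": {
            "query": "index=_internal | stats count by sourcetype"
        }
    },
    "ds_chained": {
        "type": "ds.chain",
        "options": {
            "extend": "ds_base",
            "query": "| where count > 100"
        }
    }
}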
Hi, I have installed Splunk on my Ubuntu desktop. I logged in once; however, on the second login attempt it said it was unable to connect.
Dang it I don’t know how I missed that. Thank you.
You can also try this link:   http://docs.splunk.com/Documentation/Splunk/latest/Installation/Systemrequirements
Hi, yes. Use sourcetype = app_alert_data in the input stanza, combined with a props configuration similar to what I shared. The props configuration uses a different sourcetype stanza specifically to set the following values:

invalid_cause = archive
is_valid = False

The combination of the two sourcetype stanzas gives you both preprocessing by the archive processor and parsing of the processed data by the app_alert_data stanza.
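A rough sketch of the props side, purely as an illustration (the props config referenced above is not quoted in this thread, so the preprocessing stanza name and the way it links to app_alert_data are assumptions):

# props.conf
[<preprocessing_sourcetype>]   # the "different sourcetype" stanza
invalid_cause = archive        # hand the file to the archive processor
is_valid = False               # tell the tailing processor not to read it directly

[app_alert_data]
# normal parsing settings (timestamp recognition, line breaking, ...) applied to
# the preprocessed data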
What is the relationship between ID and Event? You don't appear to be doing anything with ID in your current search. Does Event exist in your second dataset (ERROR API [ID])?

Assuming that your purpose in trying to join on ID in the first place is that you don't have Event in the second dataset and ID has a 1:1 relationship with Event, then try this:

index=abcd ("API : access : * : process : Payload:") OR ("API" AND ("Couldn't save"))
``` You could combine these rex commands into a single statement to extract ID for both INFO and ERROR cases if you can build the regex ```
| rex "\[INFO \] \[.+\] \[(?<InfoID>.+)\] \:"
| rex "\[ERROR\] \[API\] \[(?<ErrorID>.+)\] \:"
``` Assume this will ONLY occur in Info events, so it will be null for ERROR ```
| rex " access : (?<Event>.+) : process"
``` Get the common ID ```
| eval ID=coalesce(InfoID, ErrorID)
``` t marks a base (Info) event and forms Total ```
| eval t=if(isnotnull(InfoID), 1, 0)
``` Summing 't' gives total and the count of unique ErrorID gives failed ```
| stats sum(t) as Total dc(ErrorID) as Failed values(Event) as Event by ID
| eval Success=Total-Failed

This uses the unique count of ErrorID to determine failures, which is effectively the dedup of the error ID, but it assumes that one ID is one Event, so at the end the values(Event) gives the Event extracted from the Info events, joined on the common ID. Hope this helps, and always keep in mind the principle that join in Splunk is never a good way to go.
%U is week of year: https://docs.splunk.com/Documentation/Splunk/9.2.1/SearchReference/Commontimeformatvariables#Specifying_days_and_weeks

You can easily do the math to work out which week of month it is based on your start day of the week. See this example which calculates the week number with either a start of week being Sunday or Monday.

| makeresults count=31
| streamstats c
| eval _time=strptime(printf("2024-03-%02d", c), "%F")
| fields - c
| eval day_of_week=strftime(_time, "%A")
| eval day_of_month=strftime(_time, "%d")
| eval wday_sunday_start=strftime(_time, "%w"), wday_monday_start=if(wday_sunday_start=0,7,wday_sunday_start)
| eval week_of_month_sunday_start=ceil(max((day_of_month-wday_sunday_start), 0) / 7) + 1
| eval week_of_month_monday_start=ceil(max((day_of_month-wday_monday_start), 0) / 7) + 1
Good afternoon,

Yes, I am most assuredly not on AWS, but running an on-premise solution. This means that I cannot archive off to S3 buckets, which are an AWS thing (for the most part).

For your suggested solutions, can you point me towards the relevant documentation or add some additional details that might get me started on the right path? My gut reaction is that option 1 is likely the solution of choice. The Splunk configuration "props + transforms.conf" part has me scratching my head a bit, though I think I got it from the rsyslog part onward.

Thanks!
Hello. I am interested in data that occurs from Tuesday night at 8 PM until 6 AM. The caveat is that I need two separate time periods to compare: one is from the 2nd Tuesday of each month until the 3rd Thursday, and the other is any other day in the month. So far I have:

| eval day_of_week=strftime(_time, "%A")
| eval week_of_month=strftime(_time, "%U" )
| eval day_of_month=strftime(_time, "%d")
| eval start_target_period=if(day_of_week=="Tuesday" AND week_of_month>1 AND week_of_month<4, "true", "false")
| eval end_target_period=if(day_of_week=="Thursday" AND week_of_month>2 AND week_of_month<4, "true", "false")
| eval hour=strftime(_time, "%H")
| eval time_bucket=case(
    (start_target_period="true" AND hour>="20") OR (end_target_period="true" AND hour<="06"), "Target Period",
    (hour>="20" OR hour<="06"), "Other Period"
)

My issue is that my "week of month" field is reflecting the week of the year. Any help would be greatly appreciated.

EDIT: I placed this in the wrong location, all apologies.
Hello,

Recently we migrated our syslog server from rsyslog to syslog-ng. We are collecting the network devices' logs; every source logs to its own <IPaddress>.log file, and a universal forwarder pushes them to the indexer. Inputs and outputs are OK, the data is flowing, and the sourcetype is standard syslog. Everything is working as expected... except for some sources.

I spotted this because the log volume has dropped since the migration. For those sources, I do not have all of the events in Splunk. I can see the file on the syslog server; let's say there are 5 events per minute. The events are the same, for example "XY port is down", but not identical; the timestamp in the header and the timestamp in the event's message are different (the events are still the same length). So in the log file there are 5 events/min, but in Splunk I can see only one event per 5 minutes. The rest are missing... Splunk randomly picks ~10% of the events from the log file (all the extractions are OK for those; there is no special character or anything unusual in the "dropped" events).

I feel it is because of the similar events, and Splunk thinks they are duplicates, but on the other hand that cannot be, because they are different. Any advice? Should I try to add some crc salt or try to change the sourcetype?

BR. Norbert
That works great!  Just what I was looking for. Thanks much for your support, bowesmana!
@phanTom OK, so maybe there is a way to use the IN operator on values in custom fields? For example, the custom field key is department, and I want to sort for the values business, HR.
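For what it's worth, if this ends up being plain SPL over events that carry a department field (an assumption, since the custom-field context isn't shown here), IN usage looks roughly like this (index and sourcetype are placeholders):

index=main sourcetype=mydata department IN ("business", "HR")
| stats count by department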