All Posts


What do you mean "it's showing null values"? Your mvmap statement looks like it's doing what you want it to do, i.e. making sure that it only takes data with at least 1 character. Can you demonstrate the issue? The mvmap statement works as expected, i.e. this example shows that it will remove the empty middle element:

| makeresults
| fields - _time
| eval ImpConReqID=mvappend("a","","b")
| eval ImpCon=mvmap(ImpConReqID,if(match(ImpConReqID,".+"),"ImpConReqID: ".ImpConReqID, null()))
| eval base_elements=mvcount(ImpConReqID)
| eval reduced_elements=mvcount(ImpCon)

What is the relevance of the last two lines of your example to your question?
We have the same problem here. The “Performance Monitor Users” group does not exist on a domain controller. Accordingly, the domain account for the forwarder cannot be added.
Hi @gcusello, I used your solution and it worked. I now only have to fix the bytes as they don't show up, but I will try to solve it myself :D Thanks!
Referring to a previous question (Solved: How to insert hyperlink to the values of a column ... - Splunk Community), how can I add 2 different URLs for 2 different columns in the table, such that the respective hyperlink opens only when the value in the respective column is clicked?

"eventHandlers": [
    {
        "type": "drilldown.customUrl",
        "options": {
            "url": "$row.firstLink.value$",
            "newTab": true
        }
    },
    {
        "type": "drilldown.customUrl",
        "options": {
            "url": "$row.secondLink.value$",
            "newTab": true
        }
    }
]
Hi @vstan, check whether all events have the User field (field names are case sensitive!); if not, add to the coalesce command all the fields containing the User values, so they can be used as the correlation key. Then check the exact field names of TOTAL_ATTACHMENT_SIZE_SEGMENT and EMAIL_ADDRESS. Ciao. Giuseppe P.S.: Karma Points are appreciated
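As a minimal sketch of that idea (the field names user and src_user below are only placeholders for whatever fields actually carry the user value in your events, and the stats at the end is just an example aggregation):

<your base search>
``` take the first non-null candidate field as the common correlation key ```
| eval User=coalesce(User, user, src_user)
``` then aggregate on that key ```
| stats sum(TOTAL_ATTACHMENT_SIZE_SEGMENT) as TOTAL_ATTACHMENT_SIZE_SEGMENT values(EMAIL_ADDRESS) as EMAIL_ADDRESS by User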
Hello @karthi2809, I do not understand the use of the mvmap command here. Generally, the mvmap command is used to perform some iterative operation on a multivalue field. Your SPL currently reads as if you're trying to map the ImpConReqId field to the following string: "ImpConReqId: <<value of ImpConReqId>>", and if the "if" condition fails, the value gets updated to null() and ImpConReqId gets mapped to the null() value. I would suggest you first filter out the null values using the isnull() or isnotnull() functions and then perform the multivalue operations. Also, if you can share the full SPL query, it would be helpful to assist you better. Thanks, Tejas.
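As a rough illustration of that suggestion (reusing the ImpConReqID field and the sample values from the other reply; note that an empty string is not the same as null in SPL, so the filter below checks for both):

| makeresults
| fields - _time
| eval ImpConReqID=mvappend("a","","b")
``` keep only elements that are non-null and non-empty ```
| eval ImpConReqID=mvfilter(isnotnull(ImpConReqID) AND ImpConReqID!="")
``` the multivalue operation now runs only on real values ```
| eval ImpCon=mvmap(ImpConReqID,"ImpConReqID: ".ImpConReqID)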
Hi @Siddharthnegi, yes, when you save the panel (creating the dashboard), you can set up a starting zoom level and default starting coordinates that are saved with the dashboard. Ciao. Giuseppe
Hi @norbertt911, this isn't a Splunk question, but a Linux question. Anyway, we had a similar issue with rsyslog and we solved it by changing the default template: in rsyslog, for each rule, you have dynafile (in which you insert the template addressing the file to write to) and template (by default "rsyslog-fmt"), which you use to give a format to your output. Ciao. Giuseppe
What do you mean by "predefined by you"?
Hi @Siddharthnegi , I don't think that's possible: the zoom level in a map is predefined by you when you created the panel. You can only manually modify it, using the buttons in the map or your mouse. Ciao. Giuseppe
Hello, I have a dashboard with many panels, and in each panel I am using the geostats command to show the results of that panel's search on a world map. I want to add a zoom feature to it. Let me explain: let's say I am on panel 1 and I have zoomed in on America to see in which area the results are showing, just like this. Now what I want is that if I switch to a different panel, it should also be zoomed in on America. Is that possible?
Hi All, I want to filter out null values. My ImpCon field has null values, and I want to filter out the values which I don't want to show in the table. I am trying the query below, which is still showing the null values.

| eval ImpCon=mvmap(ImpConReqID,if(match(ImpConReqID,".+"),"ImpConReqID: ".ImpConReqID,null()))
| eval orcaleid=mvfilter(isnotnull(oracle))
| eval OracleResponse=mvjoin(orcaleid," ")
Update, in case anyone tried testing whether the "append" option exists: the "append" option does actually save, but appears not to work.
Specifically, I mean the dataSources section discussed here: https://docs.splunk.com/Documentation/Splunk/9.2.1/DashStudio/dashDef#The_dataSources_section

Hypothetically, I have two tables, each stored in an individual data source stanza:
Table 1 = ds.search stanza 1
Table 2 = ds.search stanza 2
The goal is to append the tables together, and then use the "stats join" method to merge them. If possible, this merge could be done as a ds.chain type stanza with two extend options, but that does not appear to be allowed. Here's the documentation for data source options: https://docs.splunk.com/Documentation/Splunk/9.2.1/DashStudio/dsOpt The document seems to be missing options like "extend", so I'm hoping someone knows whether there are additional hidden options. Now, I am trying to avoid using [] subsearches because of the 50,000 row limit, so the following append command is not desired: <base search> | append [search ....] Does anyone with mastery of JSON hacks know whether appending two data source stanzas together is possible? Thank you.
Hi, I have installed Splunk on my Ubuntu desktop. I logged in once; however, the second time I tried to log in, it said it was unable to connect.
Dang it I don’t know how I missed that. Thank you.
You can also try this link:   http://docs.splunk.com/Documentation/Splunk/latest/Installation/Systemrequirements
Hi, yes. Use sourcetype = app_alert_data in the input stanza combined with a props configuration similar to what I shared. The props stanza uses a different sourcetype setting to specifically set the following values:
invalid_cause = archive
is_valid = False
The combination of the two sourcetype stanzas gives you both preprocessing by the archive processor and parsing of the processed data by the app_alert_data stanza.
What is the relationship between ID and Event? You don't appear to be doing anything with ID in your current search. Does Event exist in your second dataset (ERROR API [ID])? Assuming that your purpose in trying to join on ID in the first place is that you don't have Event in the second dataset and ID has a 1:1 relationship with Event, then try this:

index=abcd ("API : access : * : process : Payload:") OR ("API" AND ("Couldn't save"))
``` You could combine these rex commands into a single statement to extract ID for both INFO and ERROR cases if you can make the regex ```
| rex "\[INFO \] \[.+\] \[(?<InfoID>.+)\] \:"
| rex "\[ERROR\] \[API\] \[(?<ErrorID>.+)\] \:"
``` Assume this will ONLY occur in Info events, so it will be null for ERROR ```
| rex " access : (?<Event>.+) : process"
``` Get the common ID ```
| eval ID=coalesce(InfoID, ErrorID)
``` t marks a base event and forms Total ```
| eval t=if(isnotnull(InfoID), 1, 0)
``` Summing 't' gives the total, and the count of unique ErrorID gives the failures ```
| stats sum(t) as Total dc(ErrorID) as Failed values(Event) as Event by ID
| eval Success=Total-Failed

This uses the unique count of ErrorID to determine failures, which is effectively the dedup of the error ID, but it assumes that one ID is one Event, so at the end, with values(Event), it will have the Event extracted from the Info events and joined on the common ID. Hope this helps, and always use the principle that join in Splunk is never a good way to go.
%U is week of year https://docs.splunk.com/Documentation/Splunk/9.2.1/SearchReference/Commontimeformatvariables#Specifying_days_and_weeks You can easily do the math to work out which week of the month it is based on your start day of the week. See this example, which calculates the week number with the start of the week being either Sunday or Monday.

| makeresults count=31
| streamstats c
| eval _time=strptime(printf("2024-03-%02d", c), "%F")
| fields - c
| eval day_of_week=strftime(_time, "%A")
| eval day_of_month=strftime(_time, "%d")
| eval wday_sunday_start=strftime(_time, "%w"), wday_monday_start=if(wday_sunday_start=0,7,wday_sunday_start)
| eval week_of_month_sunday_start=ceil(max((day_of_month-wday_sunday_start), 0) / 7) + 1
| eval week_of_month_monday_start=ceil(max((day_of_month-wday_monday_start), 0) / 7) + 1