All Topics

@sensitive-thug Since the .conf21 wrap-up, I would love to watch some of the breakout sessions that I missed, or rewatch some. While I do see lots online, why are none of the FEA*, non-Splunk-topic ones available? I really want to listen to FEA1966 again, and share it with some coworkers, but it's not available. Any thoughts?
Hey all, I am starting to work with dashboards and I have a table I would like to display that has a lot of data in it. Unfortunately, it seems there isn't a really clear-cut way in Dashboard Studio to change the font size for a table. I've tried a number of different things within the JSON source, but nothing I do seems to change the font size in the table. Is there a way to do this? I would like to reduce the font size so that longer values fit without forcing multiple line wraps in the row. Thanks in advance!
I have an instance of a Java application running on my local machine at the URL http://localhost:8080. Since it's a local instance, the Java application can be seen running in the CMD window, and if I perform some functionality (e.g. download a file), I can see the live logs in the CMD (e.g. "Starting download of file... Download Completed..."). I want to know whether Splunk can monitor this localhost URL so that I can see these live logs in Splunk Enterprise (or the Website Monitoring app). I have tried the Website Monitoring app, but Splunk returns a 404 for localhost. Kindly help with this issue.
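Splunk monitors files, network ports, and APIs rather than scraping live console output from a URL, and the Website Monitoring app only checks HTTP availability, not page content. One common approach (a sketch only; the paths, index, and sourcetype below are hypothetical) is to redirect the application's console output to a log file and monitor that file:

```
# Run the app with its console output redirected to a file (hypothetical path):
#   java -jar app.jar > /var/log/myapp/app.log 2>&1

# inputs.conf on the machine running Splunk -- monitor that file:
[monitor:///var/log/myapp/app.log]
index = main
sourcetype = java_app_console
```

With this in place, the same "Starting download of file..." lines you see in the CMD window should appear as events in Splunk as they are written to the file.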
Greetings dear Splunk Community,

I'll try to keep it short and simple: I have a query that returns multiple fields, but only 2 really matter for this question: eventName and eventResult. The issue is that the very first and last eventResult entries of a given eventName are different from all the other eventResult entries, so you can imagine it looking like this:

eventName | eventResult
A         | 1
A         | Data
A         | Data
A         | Data
A         | 2
B         | 3
B         | Data
B         | Data
B         | 4

I require the value of the first entry as an extra field next to the actual data, for computational purposes, for each individual eventName. There are over 100 different eventName possibilities that also change over time, so nothing hard-coded is possible, and no lookup tables either. Also no joins, since a join would cost far too much performance given the size of these tables. So I'd like:

eventName | eventResult | additionalColumn
A         | 1           | 1
A         | Data        | 1
A         | Data        | 1
A         | Data        | 1
A         | 2           | 1
B         | 3           | 3
B         | Data        | 3
B         | Data        | 3
B         | 4           | 3

Is this possible? I looked into mapping functions (to try to map the first eventResult to its eventName) but couldn't figure out anything that worked. I cannot change anything about the data structure, nor did I develop it. I'd be very appreciative of any ideas; I feel like I'm just missing something small.

Best regards,
Cyd
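One possible approach (a sketch only, assuming the "first entry" is the chronologically earliest event per eventName and that the base search can be sorted) is streamstats with the first() function, which carries the first value seen in the stream forward to every later row of the same group:

```
<your base search>
| sort 0 _time
| streamstats first(eventResult) as additionalColumn by eventName
| table eventName eventResult additionalColumn
```

Because streamstats processes rows in order and includes the current row by default, row A/1 gets additionalColumn=1 and all subsequent A rows inherit it, with no join or lookup involved.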
Hi, could you please provide the configuration for the Splunk login UI SSO to authenticate with a Google account, for Kubernetes?
Hi experts, I have the below table. How do I change the background colour of the row where Error_categories = Total_error_rate?

Error_categories     Percentage%
error_rate_Error     0.1138498
error_rate_Warning   0.0011737
error_rate_Critical  0.0000000
error_rate_HTTP      6.5950704
Total_error_rate     6.7100939

Thank you
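If this is a Simple XML dashboard, one option (a sketch; the colour codes are arbitrary) is an expression-based colour format on the table. Note this colours the Error_categories cell of that row rather than the whole row; a full-row highlight generally needs custom JavaScript/CSS:

```xml
<table>
  <search>
    <query>... your search ...</query>
  </search>
  <format type="color" field="Error_categories">
    <colorPalette type="expression">if (value == "Total_error_rate", "#F8BE34", "#FFFFFF")</colorPalette>
  </format>
</table>
```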
Hello everyone, I am in a situation where I send search results to a lookup file, and from there I need to take the last 2 rows to display as a summary in my dashboard. Below is the exact scenario.

I have a search which compares last week's and this week's data and produces results like this:

Date       Active  Inactive  Deleted  Added
10/25/2021 80      20        10       15

I need to send the results calculated in the above search to a lookup file, and I will keep appending every week. After, say, 3 weeks it will look like this:

Date       Active  Inactive  Deleted  Added
10/25/2021 80      20        10       15
11/1/2021  78      22        8        11
11/8/2021  83      18        9        6

So the above is the lookup file. Then I need to use the created lookup as input in the same query to perform some calculations (i.e. take the last 2 rows and display them as a summary of the last 2 weeks). I tried something like the below, but it didn't work. Could someone help me with this?

<search> | outputlookup test1.csv | search inputlookup test1.csv | tail 2
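A sketch of one way to structure this (assuming the lookup name test1.csv from the post): `| search inputlookup ...` is not valid SPL, because inputlookup must begin its own search with a leading pipe. Splitting the write and the read into two searches, and using append=true so each weekly run adds a row instead of overwriting the file:

```
<your weekly comparison search>
| outputlookup append=true test1.csv

| inputlookup test1.csv
| tail 2
```

The first search would be the scheduled weekly job; the second would be the dashboard panel's search that shows the last-2-weeks summary.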
Does Splunk SOAR operate in the cloud, or just on-premises?
This question is related to my previous post: https://community.splunk.com/t5/Splunk-Search/XML-field-Extraction/m-p/571944#M199301 My source has a date, which I'll be extracting using the rex command. I want my table data to be shown under those respective dates. I have used xyseries, but I cannot add other fields to the table.

source="weekly_report_20211025_160957*.xml" | rex field=source "weekly_report_(?<Date>\w.*)\.xml" | ... | table suitename name "Time taken(s)" status | xyseries name Date status

My final table should contain suitename, name, "Time taken(s)" and status (under the Date field). Is there any method to keep all these table fields after applying xyseries?
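One possible sketch (untested against your data): xyseries drops any field that is not one of its arguments, but it does accept multiple data fields, and fields you need to keep can be packed into the row key and split back out afterwards. The "|" separator below is an arbitrary choice and assumes it never appears in your field values:

```
source="weekly_report_*.xml"
| rex field=source "weekly_report_(?<Date>\w.*)\.xml"
| eval row=suitename."|".name
| xyseries row Date status "Time taken(s)"
| rex field=row "^(?<suitename>[^|]*)\|(?<name>.*)$"
| fields - row
```

With multiple data fields, xyseries names the pivoted columns per date (e.g. "status: 20211025..."), so both status and "Time taken(s)" survive the pivot.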
Hello, I use a dropdown list in my dashboard like this:

<input type="dropdown" token="web_domain" searchWhenChanged="true"><choice value="*www.colis.fr*">Colis</choice>

And I retrieve the token in my panel title like this:

<panel> <title>Application $web_domain$ - Evolution moyenne des appels</title>

Instead of $web_domain$, I would like to retrieve the display name of the $web_domain$ choice; that is, instead of displaying "www.colis.fr" I would like to display only "Colis". How can I do this, please?
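In Simple XML, a `<change>` block on the input can capture the selected choice's label into a second token via the built-in $label$ token; the token name web_domain_label below is an arbitrary choice:

```xml
<input type="dropdown" token="web_domain" searchWhenChanged="true">
  <choice value="*www.colis.fr*">Colis</choice>
  <change>
    <set token="web_domain_label">$label$</set>
  </change>
</input>

<panel>
  <title>Application $web_domain_label$ - Evolution moyenne des appels</title>
</panel>
```

The searches keep using $web_domain$ for filtering, while the title shows the human-readable label.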
Hi all, my customer has a requirement to have the "index" field in each data model used in ES. Obviously this additional field doesn't affect CIM compliance, but it's needed to apply an additional filter to the data. The question is: at the next upgrade of ES, will the customization be maintained or not? Bye. Giuseppe
Hi, I have configured a Splunk heavy forwarder on 2 machines. I want to send logs from one machine to the other, and I expect the receiver to store all the received logs in an index called "receivedlogs". This is the video I followed to configure Splunk: https://www.youtube.com/watch?v=S4ekkH5mv3E&t=454s&ab_channel=Splunk%26MachineLearning Thank you.
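A sketch of the usual split (hostname, port, and monitored path below are hypothetical). One caveat: a heavy forwarder parses events and assigns the index at the source, so setting the index on the sender's inputs is generally more reliable than setting it on the receiver's splunktcp input, which only applies to events that arrive without an index already assigned:

```
# Sender -- outputs.conf:
[tcpout]
defaultGroup = to_receiver

[tcpout:to_receiver]
server = receiver.example.com:9997

# Sender -- inputs.conf (assign the destination index here):
[monitor:///var/log/myapp.log]
index = receivedlogs

# Receiver -- inputs.conf (listen for forwarded data):
[splunktcp://9997]
```

The "receivedlogs" index must also exist on the receiver (created in indexes.conf or via the UI) before events will land in it.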
Good day team, I have an application which spans 5 servers. Each server has a different path, but the goal is always to read error.log and wrapper.log:

/log/apple/production/A1/error.log
/log/ball/production/A2/error.log
..

Here I can use a wildcard like this in the monitor stanza: /log/*/production/*/error.log. But the problem is that each server has many folders matching those wildcards, and I don't want all of them, only a few. Take the first star: I want only apple, ball or cat; any other name on any server can be ignored. Similarly for the second star: I want only A1, A2 or A3; B1, C1 and so on can be ignored.

So is it possible to write this using a regex, either in inputs.conf itself or using props?
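inputs.conf supports a whitelist setting on monitor stanzas, which is a regular expression matched against the full file path, so the allowed folder names can be listed as alternations. A sketch with the folder names from the post (adjust the alternation lists to your real servers):

```
# inputs.conf -- whitelist is a regex applied to the full path of each file
[monitor:///log]
whitelist = ^/log/(apple|ball|cat)/production/(A1|A2|A3)/(error|wrapper)\.log$
```

Anything under /log that does not match the regex (other first-level folders, B1/C1, other filenames) is simply not monitored, with no props/transforms needed.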
Hi, does anyone have any experience with parsing the version 6 schema of Umbrella logs? The release notes of the add-on https://splunkbase.splunk.com/app/3926/ only mention version 5: "1.0.5: Adds support for logging format version 5 + Firewall Logs". In my environment the change in Umbrella seems to go straight from version 4 to version 6, and "Schema upgrades are one way; you will not be able to revert this upgrade." It's scary that you can't revert. Has anyone moved to version 6, and did you make changes in local/{props,transforms}?
| datamodel "Change_Analysis" "Account_Management" search | where 'All_Changes.tag'="delete" AND 'All_Changes.user'!="*$*" | stats values(All_Changes.result) as "signature", values(All_Changes.src) as "src", values(All_Changes.dest) as "dest", values(All_Changes.user) as "users", dc(All_Changes.user) as user_count by "All_Changes.Account_Management.src_user" | rename "All_Changes.Account_Management.src_user" as "src_user", "All_Changes.user" as "user"

I am using this query to monitor for deleted accounts, but the alert keeps triggering for computer accounts ending with the $ symbol, e.g. XYZLAPTOP$, ABCLAPTOP$, etc. I have already added the filter 'All_Changes.user'!="*$*", but it doesn't help. How can I exclude these $ accounts from the report? Can anyone please help?
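One likely cause (a guess, but a common one): the where command compares strings literally, so the wildcards in "*$*" are not expanded and the filter matches nothing. A regex test with match(), anchored on a trailing $, should work instead; the rest of the query is kept as in the post:

```
| datamodel "Change_Analysis" "Account_Management" search
| where 'All_Changes.tag'="delete" AND NOT match('All_Changes.user', "\$$")
| stats values(All_Changes.result) as "signature", values(All_Changes.src) as "src",
        values(All_Changes.dest) as "dest", values(All_Changes.user) as "users",
        dc(All_Changes.user) as user_count by "All_Changes.Account_Management.src_user"
| rename "All_Changes.Account_Management.src_user" as "src_user"
```

In the regex "\$$", the first \$ is a literal dollar sign and the second $ anchors it to the end of the username, so only accounts ending in $ are excluded.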
Hello all,   I am trying to extract a field from the below events, and the extraction works fine on events that come in with a value for the field. However, on events that come in with empty values it picks the next matching value. How do I fix it so that it only picks the required value and ignores the empty field? Expression used: (?:[^,]+,){23}\"(?<occurance>\w+)\",.*   Below is an event that extracts correctly: 50271232,00004102,00000000,1600,"20210901225500","20210901225500",4,-1,-1,"SYSTEM","","System",46769357,"System","Server-I \x83W\x83\x87\x83u\x83l\x83b\x83g(AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/\x92l\x8ED\x94\xAD\x8Ds/04_\x92l\x8ED\x8Ew\x8E\xA6\x83f\x81[\x83^\x98A\x8Cg_\x8CߑO1TAX:@5V689)\x82\xF0\x8AJ\x8En\x82\xB5\x82܂\xB7","Information","admin","/App/Sys/AJS2","JOBNET","AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/\x92l\x8ED\x94\xAD\x8Ds/04_\x92l\x8ED\x8Ew\x8E\xA6\x83f\x81[\x83^\x98A\x8Cg_\x8CߑO1TAX","JOBNET","AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/\x92l\x8ED\x94\xAD\x8Ds/04_\x92l\x8ED\x8Ew\x8E\xA6\x83f\x81[\x83^\x98A\x8Cg_\x8CߑO1TAX","AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/\x92l\x8ED\x94\xAD\x8Ds/04_\x92l\x8ED\x8Ew\x8E\xA6\x83f\x81[\x83^\x98A\x8Cg_\x8CߑO1TAX","START","20210901225500","","",11,"A0","AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/\x92l\x8ED\x94\xAD\x8Ds","A1","04_\x92l\x8ED\x8Ew\x8E\xA6\x83f\x81[\x83^\x98A\x8Cg_\x8CߑO1TAX","A3"   The below event does not have a value in the field, and the next matching value is picked up instead. 
50266209,00000501,00000000,3476,"20210901220311","20210901220311",4,-1,-1,"SYSTEM","","psd005",142331,"MS932","OR01201S [psd005:HONDB1] YSN1 free 4.52% \x82\xAA\x82\xB5\x82\xAB\x82\xA2\x92l5%\x82\xF0\x89\xBA\x89\xF1\x82\xE8\x82܂\xB5\x82\xBD (Free size = 1466560KB) [Jp1 Notified]","Alert","","/insight/PI","","","","","","","","","",9,"ACTION_VERSION","510","OPT_CATEGORY","OS","OPT_PARM1","","OPT_PARM2","","OPT_PARM3","","OPT_PARM4","","OPT_SID","HONDB1","OPT_URL1","","OPT_URL2","",   Please help in this.
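A likely cause: `[^,]+` inside the repeating group requires every skipped field to be non-empty, so empty fields between commas are not counted and the capture shifts to a later value. Changing `+` to `*` counts empty fields too, i.e. for rex: `(?:[^,]*,){23}\"(?<occurance>\w*)\"`. A small Python sketch with simplified synthetic rows (the target is the 4th comma-separated field here, instead of the 24th in the real JP1 events) illustrates the difference:

```python
import re

# old pattern: [^,]+ cannot match an empty field, so the field count shifts
old = re.compile(r'(?:[^,]+,){3}"(?P<occurance>\w+)"')
# new pattern: [^,]* also matches empty fields, \w* allows an empty capture
new = re.compile(r'(?:[^,]*,){3}"(?P<occurance>\w*)"')

row_full  = 'a,b,c,"VALUE",rest'
row_empty = 'a,,c,"",next,"WRONG"'

print(new.search(row_full).group("occurance"))   # VALUE
print(new.search(row_empty).group("occurance"))  # empty string, as desired
print(old.search(row_empty).group("occurance"))  # WRONG -- shifted to a later field
```

One caveat: this counting approach assumes no quoted field before the target contains a comma; if that can happen in your data, a stricter CSV-aware pattern would be needed.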
Hi, we are using Splunk Cloud 8.2, mainly as a Splunk SIEM solution. We currently have many scheduled alerts, searches and reports. In recent days we have seen 21% of searches being skipped, and job execution time has also increased. Since yesterday we are unable to see output results for any of the scheduled jobs, but we do get results when we run the same search ad hoc. We also see the below errors and warnings in our console:

- The percentage of non high priority searches skipped (74%) over the last 24 hours is very high and exceeded the red thresholds (20%) on this Splunk instance. Total Searches that were part of this percentage=7056. Total skipped Searches=5271
- The instance is approaching the maximum number of historical searches that can be run concurrently.
- The number of extremely lagged searches (1) over the last hour exceeded the red threshold (1) on this Splunk instance

Could you please share a solution we could implement in this case?
I have multiple concurrent saved searches (around 6). All of them end with an outputlookup command, each writing to a separate KV store collection. The searches take too long to execute the outputlookup command; they work fine if outputlookup is removed. Any suggestions? I know there is a limit on the number of rows written by the outputlookup command, but since all the searches are within that limit, I'm wondering whether there is a limit on the number of concurrent outputlookup commands. Is there any such thing? Will one search's outputlookup wait for another's to complete? If so, is there any solution for that?