All Topics


Gurus, I am working on a Dashboard Studio dashboard and would like to add the output of a transaction the way it is usually shown in the search GUI, for debugging purposes, so I can easily see whether the transaction is correct. It turns out the only option I seem to have is a table, but there I only get the raw message. That's ugly and unreadable, of course, since the newlines are merged into one line. Is there a way to do this within a dashboard and make the message look just like it does in the search GUI? Perhaps I could re-insert the newlines? Thanks
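One possible workaround, sketched under the assumption that the merged message lives in _raw and that the table visualization renders multivalue fields one value per line; the index, sourcetype, and transaction field below are placeholders.

index=main sourcetype=your_sourcetype
| transaction your_session_field
| rex field=_raw max_match=0 "(?<event_line>[^\n]+)"
| table _time event_line

Here rex with max_match=0 captures every non-empty line of _raw into the multivalue field event_line, which the dashboard table should then display on separate lines within the cell.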
I'm curious what the best way is to test if a directory exists on a server (Windows/*NIX) and, if it exists, have the deployment server push the appropriate app out to the given server to pick up the logs. I've been told that it's best not to just push out all apps to all servers, so I'm trying to be more selective. At the moment we run a script (bash, PowerShell) on the local server with Splunk and then create custom inputs.conf files to have them send the logs we need. However, this prevents the deployment server from managing those apps. I'm curious if there's a better way to do this, so we can manage the apps through the deployment server and don't have these one-off scenarios that we have to document so others know about them.
Not sure if I am missing something, but the correlation searches provided by ESCU are not consistent in their results. Some identify the user in a field user_id, some in a field UserID. This is inconsistent (which I could live with), but it does not match up to the fields used (by default) to identify users within Enterprise Security - Incident Review, so I need to add them to the "Incident Review - Event Attributes". In addition, if I am using data enrichment, then I also need to add fields like UserID_email, UserID_bunit, UserID_category, etc. to "Incident Review - Event Attributes". If the ESCU correlation searches returned a more "standard" set of fields as results, it would make things work more "out of the box". I appreciate that I might have missed something obvious, and I hope I have - I value all replies.
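As a stopgap while the field names differ, one hedged sketch is to append a normalization line to the end of a cloned correlation search so that Incident Review always sees one field; the target field name user is an assumption about what your Incident Review event attributes expect, and the source field names come from the examples above.

... existing ESCU correlation search ...
| eval user=coalesce(user, user_id, UserID)
| fields - user_id UserID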
Below is the sample input for my search:

BusinessIdentifier : 09 ***** MessageIdentifier : 3308b7dd-826c-4e98-8511-6a018c5f8bcc ***** TimeStamp : 2022-03-16T11:08:30.013Z ***** ElapsedTime : 0.25 ***** InterfaceName : NLTOnline ***** ServiceLayerName : OSB ***** ServiceLayerOperation : CreateQPBillingEvents ***** ServiceLayerPipeline : requestPipeline ***** SiteID : ***** DomainName : ***** ServerName : DEVserver ***** FusionErrorCode : ***** FusionErrorMessage : ***** <Body xmlns="http://schemas.xmlsoap.org/soap/envelope/"><com:createQPBillEvents xmlns:com="com.alcatel.lucent.on.ws.manager"> <com:ACTION_DATE>2021-08-30T23:59:59+08:00</com:ACTION_DATE> <com:ADR_BLDG_TYPE>HDB</com:ADR_BLDG_TYPE>

I need to extract the values of the fields below:

ElapsedTime : 0.25
InterfaceName : NLTOnline
ServiceLayerName : OSB
ServiceLayerOperation : CreateQPBillingEvents
ServiceLayerPipeline : requestPipeline

Using xmlkv it's not working. Can someone help me with the right command?
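xmlkv extracts key/value pairs from XML elements, but the fields needed here sit in the plain-text header before the SOAP body, so it has nothing to match there. A hedged sketch using rex instead, with one pattern per field (adjust the character class if values can contain spaces):

| rex field=_raw "ElapsedTime\s*:\s*(?<ElapsedTime>[^\s*]+)"
| rex field=_raw "InterfaceName\s*:\s*(?<InterfaceName>[^\s*]+)"
| rex field=_raw "ServiceLayerName\s*:\s*(?<ServiceLayerName>[^\s*]+)"
| rex field=_raw "ServiceLayerOperation\s*:\s*(?<ServiceLayerOperation>[^\s*]+)"
| rex field=_raw "ServiceLayerPipeline\s*:\s*(?<ServiceLayerPipeline>[^\s*]+)"
| table ElapsedTime InterfaceName ServiceLayerName ServiceLayerOperation ServiceLayerPipeline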
Dear Splunkers, I want to add a drilldown link to my dashboard that redirects to a remote website. Currently, I do it with the following URL using the <link> tag inside the drilldown: <link>http[:]//website.com/param1=xyz</link> The problem is that when the user clicks the link, param1=xyz is part of the URL and is visible in the browser. Does drilldown support HTTP POST so that I can hide param1=xyz from being displayed in the browser? Regards.
Hi, can anyone think of a way to get Splunk versions reported from universal forwarders when in an intermediate forwarder environment? I have tried searches like index=_internal sourcetype=splunkd group=tcpin_connections but it only returns the agent version of the intermediate layer, not the UF versions behind it. Are there any commands that can be deployed to each UF to collect that information?
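One hedged sketch, assuming the intermediate forwarders forward their own _internal index onward to the indexers: the tcpin_connections metrics logged by the intermediates describe the UFs that connect to them, including a version field, so filtering on the intermediate hosts should surface the UF versions. The host filter below is a placeholder for your intermediate forwarder names.

index=_internal sourcetype=splunkd source=*metrics.log* group=tcpin_connections host=intermediate_fwd*
| stats latest(version) as splunk_version latest(os) as os by hostname

If the intermediates do not forward their _internal index, the same search run directly on an intermediate should show the same data.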
I have scheduled a Splunk report and set the search time range as Previous week. The report I am getting covers Sunday to Saturday, but I want the search to cover Monday to Sunday of the previous week. Please help here.
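One possible approach, sketched under the assumption that a custom time range is acceptable in place of the built-in Previous week preset: the @w1 snap-to modifier snaps to the most recent Monday at 00:00, so the earliest/latest pair below covers the previous Monday-to-Sunday week. The index, sourcetype, and stats clause are placeholders for the report's own search.

index=your_index sourcetype=your_sourcetype earliest=-1w@w1 latest=@w1
| stats count by host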
I have a search that counts the vulnerabilities for a given team and places them on a bar chart on a dashboard, based on the "Risk" field, to display how many Critical, High, Medium or Low events there are. The problem I have is that not all teams have all 4 levels of vulnerabilities, so the graphs look a bit rubbish. Some only have one level, others have 3 or 4, and the graphs only show the vulnerabilities that have a value. I would like to always have Critical, High, Medium AND Low on the x-axis for every team, even though the value for some of these may be zero. For example, if a team has 5 Mediums, the graph only shows one bar. How do I create a bar chart that shows: Critical=0, High=0, Medium=5, Low=0? Thanks
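A hedged sketch of one common workaround: append a zero-count row for each severity so every Risk value is always present, then merge them with the real counts. The index and team filter are placeholders.

index=vuln_index team="some team"
| stats count by Risk
| append
    [| makeresults
     | eval Risk=split("Critical,High,Medium,Low", ",")
     | mvexpand Risk
     | eval count=0
     | table Risk count]
| stats sum(count) as count by Risk

The appended subsearch contributes a 0 for every severity, and the final stats sums it with whatever real counts exist, so a team with only Mediums still gets Critical, High and Low bars at zero.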
I'm attempting to renew the certificate on a Splunk proxy server. According to our workflow, after the certificate has been renewed I need to restart the nginx service. My question is: will restarting the nginx service on the Splunk proxy server end any sessions between the user and the POD, or do they have some persistence through a restart? Even if there is only a brief outage, it needs to be known for our documentation. Thanks!
I've been comparing two lookup files, but what I have so far is pure arithmetic; I am trying to implement a true comparison that matches values and provides a percentage based on how many matches were found. So if I have three values in file1 and only two of them match in file2, the percentage would be 66.6%.

| inputlookup file1
| eventstats dc("Serial Number") as file1_Count
| dedup file1_Count
| inputlookup append=T file2
| eventstats dc("System Serial Number") as file2_Count
| dedup file2_Count
| fields file1_Count, file2_Count
| eval percentage=round('file2_Count'/'file1_Count'*100,2)."%"
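A hedged sketch of a match-based version, using the field names from the query above (Serial Number in file1, System Serial Number in file2): the left join keeps every row of file1 and marks the ones that also appear in file2, and the percentage is matched rows over file1 rows. Note that join subsearches are subject to result limits on very large lookups.

| inputlookup file1
| rename "Serial Number" as serial
| join type=left serial
    [| inputlookup file2
     | rename "System Serial Number" as serial
     | eval in_file2=1]
| stats count as file1_count sum(in_file2) as matched_count
| eval percentage=round(matched_count/file1_count*100, 1)."%"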
Hello everyone, I'm trying to schedule an alert looking like this: index=network host=device1 | stats count by sourceip | where count > 2 (over the last 7 days). I will schedule it daily and I want it to search the last 7 days to see if an IP is found more than 2 times and return events like the below:

    sourceip           count
1   162.14.xxx.xxx     5
2   185.225.xxx.xxx    7
3   203.122.xxx.xxx    3
4   61.246.xxx.xxx     6

The problem is that the next day I don't want to see the same results if there is no new data from a new IP in the last 24h. So I need to add a condition that only allows the search to return results if a new result (where count > 2) was added in the last 24h. Do you have any suggestions? Thank you in advance.
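One hedged reading of "new" is a source IP whose earliest event in the 7-day window falls inside the last 24 hours; under that assumption, a sketch could look like the following. If "new" instead means "not included in any previous alert", you would need to persist previously seen IPs in a lookup and filter against it.

index=network host=device1 earliest=-7d@d
| stats count earliest(_time) as first_seen by sourceip
| where count > 2 AND first_seen >= relative_time(now(), "-24h")
| convert ctime(first_seen)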
Hi, a question from a high level about what goes on behind the scenes. I have an internal user who has written lots of handy macros that get chained together. The dashboards leveraging the macros use a base query, with panels that continue processing the base query's result set. This user is hitting disk quota usage limits that other internal users do not hit. The macros perform a series of joins and appends along the way, with 4 joins not being unusual. I'm wondering if the joins perhaps create multiple copies of the left side for each join along the way, requiring more disk space during the processing stages even if the end result is "small". The usage reported in the search does not match the sum total of the usage on the job inspection page, so we are not sure what is consuming the space. I just ran one example query of the chained macros, broken out to its query form in an ad hoc search, and the end result was only 64k events that are small in size (less than 50 characters). So I guess my questions are: 1. Do joins require a lot of disk space usage from the user's quota? 2. Any tips on how to debug end-user issues with disk quota usage?
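For question 2, one hedged way to see what is actually occupying a user's search quota is to list that user's jobs and their dispatch-directory usage on the search head; the username below is a placeholder, and the exact set of fields returned can vary by version.

| rest /services/search/jobs splunk_server=local
| search author="that_user"
| eval diskUsageMB=round(diskUsage/1024/1024, 1)
| table title author diskUsageMB ttl isSavedSearch
| sort - diskUsageMB

Comparing the per-job diskUsage values against the quota usually shows whether a few join-heavy searches, or simply many jobs kept alive by long TTLs, are responsible.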
TL;DR: How would you approach adding multi-tenancy to SSE? Hi there, I am looking to use the Splunk Security Essentials (SSE) app on a search head (SH) that is peered with a bunch of other SHs that have their own data. The app works fine, but it throws all the data it can find onto one pile and does its thing. What I'd like is to be able to set a SSE-wide extra query constraint (splunk_server=whatever) so that it would only look at data from that peered SH. This applies both to the original introspection, as well as the subsequent reports, and mapping to the MITRE framework. Best case scenario, I can add a drop-down to select the peer and now the app would work with data from that peer. Alternatively, I guess I could deploy a modified app for each peer that is then configured to look at that data only. I'm relatively new to Splunk ( hi :wave: ) but not so new to development, so I'm happy to roll up my sleeves. I was hoping that perhaps somebody with a good understanding of the app (there's a lot going on) could give me some pointers on the best way to tackle this. thanks in advance for your input, much appreciated  : ) joost
Let's say we have 3 different events (2 with failure messages and 1 with a reconfigured message) based on the service name and timestamp.

Event 1: 2022-07-25 08:29:38.516 service_name=addtocart message=failure
Event 2: 2022-07-25 08:29:35.516 service_name=addtocart message=reconfigured
Event 3: 2022-07-25 08:29:30.516 service_name=addtocart message=failure

The output should show which service failed and was not reconfigured again, based on the latest timestamp:

_time                    service_name  message
2022-07-25 08:29:38.516  addtocart     failure
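A hedged sketch, assuming service_name and message are already extracted fields and that "not reconfigured again" means the most recent event per service is still a failure; the index is a placeholder.

index=your_index service_name=* message=*
| stats latest(_time) as _time latest(message) as message by service_name
| where message="failure"
| table _time service_name message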
Experts, I have the below XML dashboard code. The first panel displays a calendar heat map using the d3.js library, as shown below. My requirement is that when a day on the calendar is clicked, that date value needs to be passed as a token to the second panel. I could not get this working. Could you please help me?

<form script="autodiscover.js" hideChrome="true">
  <label>Monthly Utilization</label>
  <row>
    <panel>
      <title>Rolling Average</title>
      <html>
        <div id="search1" class="splunk-manager" align="center" data-require="splunkjs/mvc/searchmanager"
             data-options='{ "search": { "type": "token_safe", "value": "source=...........| timechart span=1d values(AverageVAL) as \"Average VAL\""}, "cancelOnUnload": true, "preview": true }'>
        </div>
        <div id="heat_map" class="splunk-view" align="center" data-require="app/MFDashboard/calendarheatmap/calendarheatmap2"
             data-options='{ "id" : "fcal", "managerid" : "search1", "domain" : "month", "subDomain" : "day" }'>
        </div>
      </html>
      <drilldown>
        <set token="Selectedday">$click.value$</set>
      </drilldown>
    </panel>
  </row>
  <row>
    <panel depends="$Selectedday$">
      <html>
        <p> Testing... </p>
      </html>
    </panel>
  </row>
</form>
I have a field called RenderedMessage in an event log which has the following text: Task finished: TaskID 1 for branch 6000. I have been given the task to alert, in an email, all the branches that have the task finished. In my search, I am able to get the events for this task with index=prod | spath RenderedMessage | search RenderedMessage="*Task finished: ColleagueNextWeekTask*" How shall I extract only the branch values from these events/messages? I need only the 6000 from this. Thank you.
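A hedged sketch based on the sample text above ("... for branch 6000"); the rex pattern assumes the branch is always the number following the words "for branch" and may need adjusting if the wording varies.

index=prod
| spath RenderedMessage
| search RenderedMessage="*Task finished: ColleagueNextWeekTask*"
| rex field=RenderedMessage "for branch (?<branch>\d+)"
| dedup branch
| table branch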
Hello, I'm troubleshooting a possible problem with the DB Connect app. We set up a DB input that indexes on a 90-second frequency with a rising column based on time; we have to index this data fairly frequently for monitoring. At around 4 PM, the monitoring team told me that data had stopped indexing. I checked the indexing log and found no errors; the log said input_mode=tail events=0, repeating for 30 minutes until we got normal indexing logs with the rising-column checkpoint again. I checked with SQL and we do have data in the database for that time window. I want to pinpoint the root cause so we don't encounter this again. Is this a problem with networking, the Oracle DB, or with Splunk itself (I doubt the last, because I didn't change anything and it resumed indexing 30 minutes later)?
| rex "^(?\d+-\d+-\d+\s+\d+:\d+:\d+)\s+\[[^\]]*\]\s*\[(?[^\]]*)\]\s*\[(?[^\]]*)\]\s*\[(?[^\]]*)\]\s*[^\[]+\s\[(?[^\]]+)" | search Log_level="ERROR" | where Process != "" | stats count AS ERRORS by Pr... See more...
| rex "^(?\d+-\d+-\d+\s+\d+:\d+:\d+)\s+\[[^\]]*\]\s*\[(?[^\]]*)\]\s*\[(?[^\]]*)\]\s*\[(?[^\]]*)\]\s*[^\[]+\s\[(?[^\]]+)" | search Log_level="ERROR" | where Process != "" | stats count AS ERRORS by Process | sort - count asc     i have above query to help get ERROR count of our processes, but I want to get the daily average of the number of errors generated by each process between a certain time interval.. lets say from 6am to 6pm from monday to friday, How can I achieve this
I have a dashboard that, only for some users (it seems to be some new ones or long-absent returning ones), is returning an "Action Forbidden." error message on panels. I have checked access permissions, but there are no differences from other users who are not receiving this error. I have also checked the Enterprise docs, but can't find a reference to this error message. The dashboard panel error message is shown below. Any help would be appreciated.
Data model (simplified):
- numeric value "Hours"
- numeric value "StartTime" (assumed to always have the time part be 00:00:00), in Unix time
- numeric value "EndTime" (same assumption as above), in Unix time
- calculated from the above two: the time period, as a Unix time value
- calculated: "Hours" per day
- string value (categorical) "Group"

Goal: get a list of days where each day contains:
- the respective date
- the "Hours per day" value, assigned to a field named after the Group

Intention: create a visualization showing which group is needed how much at what time
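A rough sketch of one way to do this, fanning each record out into its constituent days and charting the per-day hours by group. The data model and dataset names are hypothetical; depending on how the data model is defined, its fields may arrive prefixed with the dataset name, hence the rename. The sketch also assumes EndTime marks the day after the last day of the period (otherwise add 86400 to the mvrange end).

| datamodel My_DataModel My_Dataset search
| rename "My_Dataset.*" as *
| eval HoursPerDay=Hours / ((EndTime - StartTime) / 86400)
| eval day=mvrange(StartTime, EndTime, 86400)
| mvexpand day
| eval _time=day
| timechart span=1d sum(HoursPerDay) by Group

mvrange generates one Unix-time value per day in the period, mvexpand creates one row per day, and timechart ... by Group produces one column per Group value, i.e. the hours-per-day figure ends up in a field named after the group.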