All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi All, I am using the below query and it works fine, i.e. it shows how many emails were triggered to a distribution list in a month:

sourcetype="ms:o365:reporting:messagetrace" SenderAddress=*** RecipientAddress=*dl1@contoso.com* Status IN (*) subject="***" MessageId=*** | timechart span=1mon count

I have the below requirement, please guide me with a query: how many emails were triggered to the DL dl1@contoso.com on a given day, along with the subject and sender address of each email; and I want to schedule this report to the user user1@contoso.com on a daily basis.
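A sketch of a possible approach, reusing the sourcetype and field names from the monthly query above (the exact field names, e.g. Subject vs. subject, may differ in your data; the report can then be scheduled daily with an email action to user1@contoso.com via Save As > Report > Schedule):

```spl
sourcetype="ms:o365:reporting:messagetrace" RecipientAddress=*dl1@contoso.com*
| bin _time span=1d
| stats count by _time, SenderAddress, Subject
```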
I need to round the max(Delay) and avg(Delay) to 3 decimals in the following command: my search | timechart span=5m avg(Delay) max(Delay) by host Thanks
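A sketch of one way to do this: when timechart splits by host, the result columns are named like "avgDelay: somehost", so a wildcarded foreach can round every column in one pass (the avgDelay/maxDelay names are renames introduced here for readability):

```spl
my search
| timechart span=5m avg(Delay) as avgDelay max(Delay) as maxDelay by host
| foreach "avgDelay: *" "maxDelay: *"
    [ eval <<FIELD>> = round('<<FIELD>>', 3) ]
```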
Hi everyone, I use dbxquery and get this result from the database:

id   count
123  12
456  24
478  6

Also I have a csv file already uploaded as a lookup in Splunk, like this:

id   type
123  Machine
478  Machine
456  Food
987  Food
789  Toys

Please, how can I add the column "type" from the lookup to the search result above? Basically this is what I want to achieve:

id   count  type
123  12     Machine
478  6      Machine
456  24     Food
987  0      Food
789  0      Toys

I tried:

|lookup lookupfile.csv id OUTPUT id type

but it doesn't work. Thanks, Julia
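A sketch, assuming lookupfile.csv is uploaded as a lookup table file: OUTPUT should list only the fields to bring in (not id, the match field), and the rows that exist only in the lookup (987, 789) have to be appended separately, since lookup alone never adds rows:

```spl
<your dbxquery search>
| lookup lookupfile.csv id OUTPUT type
| append
    [| inputlookup lookupfile.csv
     | fields id type ]
| stats values(type) as type sum(count) as count by id
| fillnull value=0 count
```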
Our firewall logs show up twice in Splunk. I configured an rsyslog server with TCP. When I configure the log server with UDP, everything is okay, but TCP is the problem: when I configure the log server on 10514/TCP, every event is duplicated.
Hi Team, We want to know the number of available agents, the used and unused agents, and the available licenses. Could you please help us find that information? Thanks & Regards, Srinivas
Hi Team, I have an event in the below format and want to extract the key-value pairs as fields. Please help extract the fields from LogDate through User. Thanks

{ [-] event: INFO 2022-09-23 11:49:59,033 [[MuleRuntime].uber.01: [papi-ust-email-notification-v1-uw-qa].get:\ping:Router.CPU_LITE @6c1fb7] org.mule.runtime.core.internal.processor.LoggerMessageProcessor: { "LogDate": "09/23/2022 16:11:13.932", "LogNo": "99", "LogLevel": "INFO", "LogType": "Process Level", "LogMessage": "Splunk anypoint log", "TimeTaken": "0:00:12.628", "ProcessName": "AnypointSplunkTest", "TaskName": "AnypointTest", "RPAEnvironment": "DEV", "LogId": "002308900.20250824210419999", "MachineName": "abc-xyz-efg", "User": "name.first" } metaData: { [+] } }

and this is the raw text:

{"metaData":{"sourceApiVersion":"1.0.0-SNAPSHOT","index":"aas","sourceApi":"papi-cust-email-notification-v1-uw-qa","cloudhubEnvironment":"AUTOMATION-QA","tags":""},"event":"INFO 2022-09-23 11:49:59,033 [[MuleRuntime].uber.01: [papi-cust-email-notification-v1-uw2-qa].get:\\ping:Router.CPU_LITE @6f3b7] org.mule.runtime.core.internal.processor.LoggerMessageProcessor: {\n \"LogDate\": \"09/23/2022 16:11:13.932\",\n \"LogNo\": \"99\",\n \"LogLevel\": \"INFO\",\n \"LogType\": \"Process Level\",\n \"LogMessage\": \"Splunk anypoint log\",\n \"TimeTaken\": \"0:00:12.628\",\n \"ProcessName\": \"AnypointSplunkTest\",\n \"TaskName\": \"AnypointTest\",\n \"RPAEnvironment\": \"DEV\",\n \"LogId\": \"002308900.20250824210419999\",\n \"MachineName\": \"abc-xyz-wd\",\n \"User\": \"name.first\"\n}"}
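A sketch of one approach, assuming the JSON payload always follows the LoggerMessageProcessor marker inside the event field (as in the sample above); spath pulls the event string out of the outer JSON, rex isolates the embedded payload, and a second spath extracts its keys as fields:

```spl
<your base search>
| spath input=_raw path=event output=event_text
| rex field=event_text "(?s)LoggerMessageProcessor: (?<payload>\{.+\})"
| spath input=payload
| table LogDate LogNo LogLevel LogType LogMessage TimeTaken ProcessName TaskName RPAEnvironment LogId MachineName User
```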
Hello Team, I am trying to migrate from Classic to Dashboard Studio and am facing an issue with setting a token. In Classic I define it in a single value panel and it comes as a subscript: <set token="money_avg">$result.money_avg$</set> and <option name="underLabel">+/- $money_avg$</option>. In Studio I am unable to find this option in the single value panel (highlighted yellow). Is there any workaround?
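One possible workaround (a sketch; ds_money is a hypothetical data source name, and the search-result token syntax $<datasource name>:result.<field>$ should be checked against your Splunk version's Dashboard Studio docs): place a markdown panel under the single value and reference the search result directly, which avoids the Classic <set token> step entirely:

```json
{
    "type": "splunk.markdown",
    "options": {
        "markdown": "+/- $ds_money:result.money_avg$"
    }
}
```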
Hi everyone, I am attempting to implement some logic in my alert searches but I can't seem to figure out how to do it. I have some event data coming into Splunk that I want to trigger a ServiceNow incident creation, using a priority value based on the event severity and the host environment (test, stage, prod, DR). I am using a case statement to assign a severity ID depending on the alert severity: | eval severity_id=case(Severity=="critical", 6, Severity=="major", 5, 1==1, 3). If I want to add a second condition to check the value of the hostEnvironment field before setting the severity ID, what would be the best way to do this? E.g. if the severity = "critical" AND hostEnvironment = test, then severity ID = 3; if the severity = "critical" AND hostEnvironment = prod, then severity ID = 6, etc. I am hoping there is a way to nest the comparison functions. Thanks in advance.
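case() conditions can be full boolean expressions, so the environment check can be combined with AND directly, no nesting needed. A sketch using the field names from the question (IN inside eval needs a reasonably recent Splunk version; otherwise expand it with OR):

```spl
| eval severity_id=case(
    Severity=="critical" AND hostEnvironment IN ("prod", "DR"), 6,
    Severity=="critical" AND hostEnvironment IN ("test", "stage"), 3,
    Severity=="major" AND hostEnvironment IN ("prod", "DR"), 5,
    1==1, 3)
```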
Hello, We have noticed the following errors coming from our Search Heads from Splunk_TA_jmx:

ERROR ExecProcessor [57556 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_jmx/bin/jmx.py" File "/opt/splunk/etc/apps/Splunk_TA_jmx/lib/solnlib/conf_manager.py", line 459, in get_conf
ERROR ExecProcessor [57556 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_jmx/bin/jmx.py" WARNING:root:Run function: get_conf failed: Traceback (most recent call last):
ERROR ExecProcessor [57556 ExecProcessor] - Ignoring: "'/.\bin\scripted_inputs\ftr_lookups.py'"
ERROR ExecProcessor [57556 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_jmx/bin/jmx.py" return super(Collection, self).get(name, owner, app, sharing, **query)
ERROR ExecProcessor [57556 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_jmx/bin/jmx.py" splunklib.binding.HTTPError: HTTP 404 Not Found -- jmx_tasks does not exist

The input is configured on a Heavy Forwarder and works fine, but as per https://docs.splunk.com/Documentation/AddOns/released/JMX/Hardwareandsoftwarerequirements we have also installed the add-on on the Search Heads, and we're not sure what to adjust or change. Does anyone have any idea how to get rid of these errors? Greetings, Justyna
Hi Team, Is it possible to suppress an HTTP error code, e.g. 404, for a specific URL instead of suppressing all 404 error codes for the application?
In ITSI, when triggering the email alert action via a NEAP, Splunk ITSI always adds footer text to the mail body. We remove the footer text in the email alert action config GUI and press Save, but when we open the config again, Splunk has added the footer back. No footer is added in the general mail settings in Splunk.
Hello everyone, I have been getting cluster maps and choropleth maps generated, but I have a few issues with them.

1. When I add the same command from the Search app to the panel in the dashboard, I lose all the state/region names. The zoom function works; is that expected?
2. Why do I have multiple tiles of the same regions running through, and how can I create a view showing only the regions where events have occurred? Screenshot attached. I know the legend doesn't match the map as values show 0, but they change and seem to be OK after 10-15 mins; I don't know why.

I am trying to search for failed/successful application logins by region/city/country. My query:

index=a sourcetype=ab | iplocation ip | search status=failure AND connectionname=" ABwebsite" | stats count by Country | geom geo_countries allFeatures=True featureIdField=Country

If I don't add ip, no values populate on the map; there's just colour. Thank you for looking into the query.
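A sketch of one adjustment, reusing the index and fields from the query above: allFeatures=True tells geom to draw every country polygon even with zero events, so dropping it should leave only the regions that actually had matching events:

```spl
index=a sourcetype=ab status=failure connectionname=" ABwebsite"
| iplocation ip
| where isnotnull(Country)
| stats count by Country
| geom geo_countries featureIdField=Country
```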
Hello, would it be possible to add a second email to the customer support account, in order to create new tickets from a secondary email address on another domain? Thanks and regards,
Splunk search was disabled because we exceeded the quota for 45 days, so we bought another license to add 10 GB to our license. The license applied fine, yet I still can't search due to the violation. I restarted the license manager server and the indexers, but nothing changed. I'm under the license limit now; what do I have to do to enable searching again?
I need a visualisation with a box plot graph. Is this feature available in Dashboard Studio?
I have multiple dashboards for the same category. I need to create one main page with tabs linking to all the dashboards. Thanks in advance.
Hi, I am looking for a hand at turning 8 product charts into one table with sparklines, if possible, for trend tracking. I am currently using a trellis split on my dashboard to populate these 8 line charts showing the number of hits per month over the course of 12 months per product. My data is stored in a lookup table .csv. My date field is stored as 04/02/2022 0:00 (4th Feb). ProductType has values like Candles, Teaset, Books. I would instead prefer to show the products in one table with a trendline/sparkline for each product tracking the last 12 months. To get the trellis working I currently use the below, which seems to work well and as needed, with expected results:

| inputlookup XXX.csv
| search ProductType="*"
| search ProductDate="*2022*"
| eval Date=strftime(strptime(ProductDate,"%d/%m/%Y"),"%b-%y")
| chart count(ProductType) by Date, ProductType limit=0
| fields - OTHER, "-"
| eval rank=case(ProductDate like "Jan-%",1,ProductDate like "Feb-%",2,ProductDate like "Mar-%",3,ProductDate like "Apr-%",4,ProductDate like "May-%",5,ProductDate like "Jun-%",6,ProductDate like "Jul-%",7,ProductDate like "Aug-%",8,ProductDate like "Sep-%",9,ProductDate like "Oct-%",10,ProductDate like "Nov-%",11,ProductDate like "Dec-%",12,1=1,13)
| rex field=ProductDate "-(?<rank_year>\d+)"
| sort 0 rank_year, rank
| fields - rank rank_year

However, when trying to get the sparklines/trendlines working with the two attempts below, I do not get the required results. All sparklines show a value of 0, yet there are results for these fields being purchased on all these different dates. I have changed the search times and tried adding buckets and spans, even evaluating _time over Date, without much luck.
| inputlookup XXX.csv
| search ProductType="*"
| search ProductDate="*2022*"
| eval Date=strftime(strptime(ProductDate,"%d/%m/%Y"),"%b-%y")
| chart sparkline count(Date) by ProductType, ProductDate limit=0
| fields - OTHER, "-"
| eval rank=case(ProductDate like "Jan-%",1,ProductDate like "Feb-%",2,ProductDate like "Mar-%",3,ProductDate like "Apr-%",4,ProductDate like "May-%",5,ProductDate like "Jun-%",6,ProductDate like "Jul-%",7,ProductDate like "Aug-%",8,ProductDate like "Sep-%",9,ProductDate like "Oct-%",10,ProductDate like "Nov-%",11,ProductDate like "Dec-%",12,1=1,13)
| sort 0 rank_year, rank
| fields - rank rank_year

And:

| inputlookup XXX.csv
| search ProductType="*"
| search ProductDate="*2022*"
| eval Date=strftime(strptime(ProductDate,"%d/%m/%Y"),"%d/%m/%Y")
| chart sparkline count(ProductDate) by AppType limit=0

I believe I am going wrong with the date eval, but I have tried a fair few combos now, nearly all with the same result: sparklines always showing 0. I have about a year's worth of data I want to track in the one visual table, very similar to Splunk's own EQ example (too many products to show nicely on a line graph). Thanks
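A sketch of an alternative, reusing the lookup fields from the question: sparkline() aggregates over _time, which does not exist in lookup rows, so deriving _time from ProductDate first is the key step (the 30d span is an assumption chosen to give roughly one data point per month):

```spl
| inputlookup XXX.csv
| eval _time=strptime(ProductDate, "%d/%m/%Y %H:%M")
| where _time >= relative_time(now(), "-12mon@mon")
| stats sparkline(count, 30d) as Trend count by ProductType
```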
I am performing two searches in an attempt to calculate a duration, but am having some issues. Here is what I have working so far. I'm getting results, but they appear in two different rows; I was expecting them to be in one row so they could be used to calculate the duration. What am I missing?

index=anIndex sourcetype=aSourceType (aString1 AND "START of script")
| eval startTimeRaw=_time
| append
    [search index=anIndex sourcetype=aSourceType (aString1 AND "COMPLETED OK")
    | eval endTimeRaw=_time ]
| table startTimeRaw, endTimeRaw
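A sketch of one way to get both timestamps into a single row, assuming each run produces exactly one START and one COMPLETED event in the time range: search both markers at once and collapse with stats instead of append:

```spl
index=anIndex sourcetype=aSourceType aString1 ("START of script" OR "COMPLETED OK")
| eval startTimeRaw=if(searchmatch("START of script"), _time, null())
| eval endTimeRaw=if(searchmatch("COMPLETED OK"), _time, null())
| stats min(startTimeRaw) as startTimeRaw max(endTimeRaw) as endTimeRaw
| eval duration=tostring(endTimeRaw - startTimeRaw, "duration")
```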
I am using the below search to get the time difference every time I see an event with a boot timestamp in it, and then get the average of those differences by host. I get the correct result if I search one host at a time, like host=abc, but if I use a wildcard for all hosts (host=*) then the results are different. I am assuming other hosts having events at the same time is causing the issue. How do I get the correct results for all hosts at once? I get the time value 11:50:58.59 if I use only host=abc, but when I list all hosts (host=*) I see the value 00:18:18.67 for host abc.

index=abc "Boot timestamp" host=abc
| eval _time=strptime(Boot_Time,"%Y-%m-%d %H:%M:%S")
| reverse
| delta _time as difference_secs
| table _time difference_secs host
| stats avg(difference_secs) as average by host
| eval average=round(average,2)
| eval time=tostring(average, "duration")

Is it possible to get all hosts' averages, or can it only be done individually? Thanks in advance.
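delta has no by-clause, so with host=* the events from different hosts interleave and corrupt the differences. A sketch using streamstats, which does support by host (field names taken from the question's query):

```spl
index=abc "Boot timestamp" host=*
| eval boot_time=strptime(Boot_Time, "%Y-%m-%d %H:%M:%S")
| sort 0 host, boot_time
| streamstats current=f window=1 last(boot_time) as prev_boot by host
| eval difference_secs=boot_time - prev_boot
| stats avg(difference_secs) as average by host
| eval average=round(average, 2)
| eval time=tostring(average, "duration")
```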