All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


What Splunk components would you include in your Splunk on AWS backups? How about HA (high availability) and disaster recovery measures?
Hello, I have a table with a blue background and white text, and this table has a drilldown by row. When I move the mouse over the different rows, the hovered row appears with background-color #ECF8FF (attached example). I have tried different approaches with <html><style>: I was able to add a border to the selected row, but I could not change the font color or the background color of the selected row (for example, a green instead of #ECF8FF). I have tried many solutions but none of them worked. Could you help me, please? Thanks a lot!
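One pattern that often works in classic Simple XML dashboards (a sketch, not a confirmed solution: the panel id `my_table` and the colors are assumptions, and the exact selectors can vary between Splunk versions and themes) is to target the hovered row's cells with CSS and `!important` from a hidden HTML panel:

```xml
<panel depends="$alwaysHideCSS$">
  <html>
    <style>
      /* hypothetical: "my_table" must match the id of your table element */
      #my_table table tbody tr:hover td {
        background-color: #00A000 !important;
        color: #FFFFFF !important;
      }
    </style>
  </html>
</panel>
```

The `depends="$alwaysHideCSS$"` trick hides the helper panel itself while still letting the stylesheet apply to the dashboard.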
I am searching for logs, and when I click on 'Show source' there are more logs associated with them. Is there a way to have these other logs show up in the events? I cannot format Show Source as easily as the search events.
Splunk version 7.3.6. When I run | dbinspect index=* I receive the expected output, but only for hot/warm buckets. Is this normal behavior? Is there any way to obtain the status of cold buckets using dbinspect? I have confirmed that there are in fact cold buckets in the expected directories and that they are searchable by Splunk.
I have a Windows 2019 server and will be installing Splunk Universal Forwarder 8.0.4. I have a firewall, and I have set the IP of this new server as its syslog server. It's my understanding that the SonicWall sends this syslog information over port 514. So how do I set up my syslog server with the Universal Forwarder to ingest and forward this data on to the indexer? Or do I need to set up a listener outside of Splunk on the new syslog server to write the data to a log file, and then simply use the forwarder to monitor that log file and send it to the indexer?
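For the first option, the Universal Forwarder can listen on a network port directly via inputs.conf (a sketch only; the sourcetype and index names here are assumptions, and ports below 1024 typically require elevated privileges — a dedicated syslog daemon such as rsyslog writing to files, with the UF monitoring those files, is the more commonly recommended design):

```conf
# inputs.conf on the Universal Forwarder (hypothetical values)
[udp://514]
sourcetype = sonicwall
index = network
connection_host = ip
```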
Hello! I am working with version 4.1.3 (latest) of the Splunk Add-on for Microsoft Cloud Services, installed on Splunk Enterprise 8.0.5. My objective is to pull data from an Azure Event Hub. I have configured an Azure App Account as well as an Azure Event Hub input, but when the input is enabled, no data comes through. Instead I get an unauthorized access error stating that Listen claims are required: 2021-05-26 13:40:40,305 level=WARNING pid=15598 tid=Thread-1 logger=uamqp.receiver pos=receiver.py:get_state:270 | LinkDetach("ErrorCodes.UnauthorizedAccess: Unauthorized access. 'Listen' claim(s) are required to perform this operation. Resource: 'sb://<namespace>.servicebus.windows.net/<event_hub_name>/consumergroups/$default/partitions/0'. TrackingId:786bfa2366b4413aa87b20c898f7f316_G38, SystemTracker:gateway5, Timestamp:2021-05-26T13:40:40") I referred to the troubleshooting section of the manual, but it only says to ensure that all IDs are correct, which I have checked and rechecked numerous times. The correct claims are also configured, but I still run into the same issue. I also found this thread which had the same issue, but the resolution does not apply to my case. How can I get around this issue? Thank you and best regards, Andrew
I had an older version of eStreamer installed; when I tried to change to a new FMC it failed. When I tried to upgrade to the latest eStreamer 4.6.0 (#3662), I did not see the Set Up option under Actions: Launch app | Edit properties | View objects | View details on Splunkbase (shouldn't there be a Set Up at the front of those options?). I thought maybe it was because it was already configured, so I completely removed the app and re-installed it clean, but I still don't see the Set Up option to let me configure the FMC and the cert. Any suggestions?
I have a log that has a timestamp and a tag, and I am calculating how many times each tag has occurred per day. I want to get results only if the events have occurred continuously over the last 4 days, but the search returns results for the last 5 days. As shown below, the 21st has no data, yet it is still reported even though the time range selected was the last 4 days.

index=*
| eval epochtime=strptime(Log_Message_Time, "%m/%d/%Y %H:%M:%S")
| eval Event_Date=strftime(epochtime, "%d-%m-%Y")
| stats delim="," values(Tag) AS _Tag values(Buffer_Value) AS Buffer_Value values(diff) AS diff count AS Per_Day_Occurance BY Event_Date host
| mvexpand Buffer_Value
| mvcombine Log_Message_Tag
| rename host AS Server
| eventstats count AS Days BY Server
| search Days>=4
| join type=left Server [| inputlookup pg_ld_production_servers | table Server Site]
| table Site Server Event_Date Log_Message_Tag Per_Day_Occurance diff
| sort Event_Date
| rename Log_Message_Tag AS "Historian Tag" Event_Date AS "Event Date"

host            event date   tag                  occurred per day
BELL-MESAPPBC1  20-05-2021   tag1,tag2,tag3       2
host            22-05-2021   tag2,tag4,tag5,tag1  3
host            23-05-2021   tag1                 4
host            24-05-2021   tag2,tag3            5
Hi Team, I need help identifying how to find the path/directory of my alerts and reports. For example, all my alerts and reports are stored in default.meta. Where can I see this path/directory name from the UI to confirm this?
I have the following search:

earliest=-1d@d latest=@d index=cdb_summary sourcetype=cfg_summary source=CDM_*_Daily_Summary
| search hva=*
| eval FailedSTIGs=mvsort(split(FailedSTIGs,","))
| stats values(fismaid) as fismaid dc(asset_id) as Affected by FailedSTIGs,hva
| lookup DHS_Expected_Checks "STIG ID" as FailedSTIGs output "Rule Title"
| fit TFIDF "Rule Title" as rule_tfidf ngram_range=1-12 max_df=0.8 min_df=0.2 stop_words=english
| fit KMeans rule_tfidf* k=8
| stats values(FailedSTIGs), values("Rule Title") by cluster

How can I add stop words to the stop_words argument? In Python I would write the following:

from sklearn.feature_extraction import text
stop_words = text.ENGLISH_STOP_WORDS.union(my_additional_stop_words)

Obviously I can't use Python here, but I am not familiar enough with Splunk searches to know whether it's possible to modify the english keyword in a similar way so that it takes in additional words like "Windows".
So I have been using the following Simple XML to hide panels based on a token from a dropdown: <panel depends="$operating_system$"> However, I want a few panels to only appear when "ALL" is selected, not when an individual operating system is selected. Is that possible using the "depends" attribute? If not, how? Thanks!
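A common pattern for this (a sketch; the input label, choice values, and the `show_all_panels` token name here are assumptions about your dashboard) is to set a helper token only when the "ALL" value is chosen, using a <change>/<condition> block on the dropdown, and have the extra panels depend on that token instead:

```xml
<input type="dropdown" token="operating_system">
  <label>Operating System</label>
  <choice value="ALL">ALL</choice>
  <choice value="Windows">Windows</choice>
  <choice value="Linux">Linux</choice>
  <change>
    <condition value="ALL">
      <set token="show_all_panels">true</set>
    </condition>
    <condition>
      <unset token="show_all_panels"></unset>
    </condition>
  </change>
</input>
```

The panels that should only appear for "ALL" would then use <panel depends="$show_all_panels$">, while the rest keep depending on $operating_system$.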
Hello there. I have a series of GET/POST requests. The requirement is to have in the dashboard a stacked column chart that shows, per server, the count of successful requests below and, stacked above them, the failed requests (with status > 500). What I'm doing is the following:

| chart count(eval(tonumber(status)>=500)) as internal_errors, count as total_requests by host
| eval safe_requests=total_requests-internal_errors
| table host, safe_requests, internal_errors

Is there a better way to do that? Second question: in the result I'm getting, internal_errors is displayed before the (hopefully larger) count of successful requests; you can see this on the first 2 servers. Is there any way to change this order? Third question: is it possible to define 2 sets of colors (e.g. green and red) for the stacked values? Thanks for any reply!
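For the color question, one option in classic Simple XML (a sketch; the exact hex values are arbitrary choices) is the charting.fieldColors option on the chart panel, which maps series names to colors:

```xml
<option name="charting.fieldColors">{"safe_requests": 0x65A637, "internal_errors": 0xD93F3C}</option>
```

As for ordering, the stacking order of a column chart generally follows the order of the fields produced by the search, so listing safe_requests before internal_errors in the final table (or a fields command) should control which series is drawn first.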
A quick search didn't find anything. I am looking to determine what the most used and average search windows are, i.e. how far back most of my users are actually looking. Is this possible?
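One hedged starting point is the _audit index, which records ad-hoc searches. This is only a sketch: the api_et/api_lt fields (the requested earliest/latest epoch times) are assumptions you should verify against your own audit events, and they can be non-numeric (e.g. "N/A" for all-time searches), which this sketch silently drops:

```spl
index=_audit action=search info=granted search=*
| eval window_sec = api_lt - api_et
| where isnotnull(window_sec)
| stats count avg(window_sec) as avg_window_sec by user
| sort - count
```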
I am new to Splunk and am trying to create a weekly report which will always give data for the previous week (Sunday to Saturday, UTC). For example, if I select "week x" from the dropdown, it should reflect data for the last week (Sunday to Saturday only). As of now I am getting data for the current day plus the 6 previous days (7 days total). Please help with the logic.
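One way to express "the previous full Sunday-to-Saturday week" with Splunk time modifiers is to snap to Sunday with @w0 (a sketch; note the snapping is done in the searching user's configured timezone, not necessarily UTC):

```spl
earliest=-7d@w0 latest=@w0
```

This runs from 00:00 on the Sunday before last up to (but not including) 00:00 on the most recent Sunday, i.e. exactly the previous Sunday-through-Saturday week.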
So IIS logs are usually space-delimited between fields, but I have recently realized that when a certain field value contains a quotation mark (in this example it is the cs_uri_query field), the field value starting with the quotation mark swallows the entire rest of the log line as its value, even though there are spaces within that value. An example log showing the problem can be found below:

2021-05-15 14:02:58 11.11.11.11 GET /WebID/IISWebAgentIF.dll postdata="><script>foo</script> 55000 - 11.11.11.111 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 404 0 64 0 11.11.11.111:60754

My transforms field list, with the field delimiter set to " ", is as follows:

date time s-sitename s-computername s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs-version cs(User-Agent) cs(Cookie) cs(Referer) cs-host sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken X-Forwarded-For

Thank you for reading,
Hi, I want to divide each hostname's count by the count of the "documentcompletetime" field.

index=nextgen sourcetype=lighthouse_json datasource=webpagetest step="Homepage"
| chart count(url) by hostname

The output of the query is as below... The count(url) column is what I want to divide by the count of the "documentcompletetime" field. This field is available in the events (screenshot attached). I want to divide each value of the count(url) column by the "documentcompletetime" count, which is 48 in this example. documentcompletetime is not a static value; it changes based on the test timings. Can you please help?
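One possible approach (a sketch; the field names are taken from the question, but the exact stats you need may differ) is to compute the overall documentcompletetime count in the same search with eventstats, then divide per hostname:

```spl
index=nextgen sourcetype=lighthouse_json datasource=webpagetest step="Homepage"
| eventstats count(documentcompletetime) as dct_count
| stats count(url) as url_count max(dct_count) as dct_count by hostname
| eval ratio=round(url_count/dct_count, 2)
| table hostname url_count dct_count ratio
```

Because eventstats runs over all matching events before the split by hostname, dct_count carries the total (48 in your example) onto every row.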
Hi, I'm not seeing a way to monitor EKS Fargate metrics and logs to troubleshoot and find the root cause of issues. Can you tell me which agent is needed? Since Fargate is a serverless way of hosting containers, we don't use DaemonSets to monitor metrics and logs. Please give me clear steps to meet this requirement. Regards, Manojkumar Tenali.
Is there a way to show a trend with the Horseshoe Meter viz, in a similar way to the single value viz?
Hello folks, thanks for visiting my question. Users are getting two kinds of errors, say A and B, one at a time; both cannot happen simultaneously. I want to get the number of users facing both types of error. Can anyone please suggest a possible query to get this data? Thanks in advance.
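A common pattern for "users who saw both" (a sketch; the index name and the user/error_type field names are assumptions about your data) is to count distinct error types per user and keep only users with both:

```spl
index=myindex (error_type="A" OR error_type="B")
| stats dc(error_type) as error_types by user
| where error_types=2
| stats count as users_with_both_errors
```

Dropping the final stats line would instead list the individual users rather than just their count.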
Hi All, Is the AppDynamics Machine Agent compatible with Ubuntu 20.04? When I searched the compatibility docs, the latest version supported is Ubuntu 18.04 - https://docs.appdynamics.com/21.5/en/infrastructure-visibility/machine-agent/machine-agent-requirements-and-supported-environments Has anyone here tried installing the Machine Agent on Ubuntu 20.04 before? Thanks for the reply! Regards, Yan