Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello! In my org we currently have the default OpenSSL certificates, and we are going to install either self-signed or CA-trusted certificates instead. How can I take this forward? 1. What certificate formats (.cert/.pem/other) are required for installation on Windows OS? 2. What are the prerequisites? 3. How and where do I install them? I need complete information.
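For reference, Splunk Web expects PEM-format certificate and key files regardless of OS, configured in web.conf. A minimal sketch, assuming hypothetical file paths (convert .cer/.crt/.pfx files to PEM with openssl first if needed):

```ini
# %SPLUNK_HOME%\etc\system\local\web.conf
[settings]
enableSplunkWebSSL = true
# both files must be PEM-encoded
serverCert = C:\Certs\splunkweb-cert.pem
privKeyPath = C:\Certs\splunkweb-key.pem
```

A restart of Splunk is required after changing these settings.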
I've created a dropdown input field that shows the user accounts that are locked out, and I have a search string for my dashboard panel to display those locked-out accounts. When I select any of the user accounts listed in the dropdown, the dashboard panel doesn't show them. How do I fix this?
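A common cause is the panel search not referencing the dropdown's token, or not quoting it. Assuming the input's token is named user_tok and the lockout events carry a user field (both names are assumptions, since the actual dropdown and search were not shown), the panel search would look something like:

```
index=wineventlog EventCode=4740 user="$user_tok$"
| table _time user src host
```

The quotes around $user_tok$ matter when account names contain spaces or special characters.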
Hi all, I have X number of data models in the search head that I want to get usage information about. Is there a way to get a list of users who accessed them?
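One way to approximate this is the audit index, which records each search along with the user who ran it. The rex below is an assumption about how your searches reference the data models by name:

```
index=_audit action=search info=completed search="*datamodel*"
| rex field=search "datamodel[=\s]+\"?(?<datamodel_name>\w+)"
| stats count by user datamodel_name
```

This catches explicit | datamodel / | tstats ... datamodel= usage in ad-hoc searches; dashboard-driven access shows up the same way.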
I need help doing cumulative percentiles, such as p90, over a period of time. This is different from rolling averages or taking the p90 of individual spans of time. For example, I'm trying to calculate the cumulative, rolling p90 over a month. Here's a table that illustrates what I want:

Day                  | P90 Value Calculation
2020-07-01 (Day 1)   | p90 of all events on day 1
2020-07-02 (Day 2)   | p90 of all events on day 1 and day 2
2020-07-03 (Day 3)   | p90 of all events from day 1 to day 3
...                  | ...
2020-07-31 (Day 31)  | p90 of all events during the month (day 1 - 31)

If I wanted the p90 for individual days, it would be very simple: base_query | timechart span=1d p90(latency). Unfortunately, that's not what I want. There are 100,000+ events each day. I tried using streamstats but didn't get the expected results; I suspect this is because streamstats only has a maximum window size of 10,000 events. My failed approach was to take the cumulative p90 of every event over a time period, and then return the final event of each day in that period:

base_query
| bin span=1d _time as day
| streamstats p90(latency) as p90latency
| dedup 1 day
| sort by _time
| table _time, p90latency

Can anyone show me how to do a cumulative p90 in Splunk? I'm using an imperfect workaround of cumulative weighted-average p90 values, but it's just an approximation and I'd like to have the real deal instead.
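One workaround that sidesteps the streamstats window limit is to bucket latencies into a per-day histogram, accumulate the histogram across days, and read the p90 off the cumulative distribution. A sketch, with two stated assumptions: the latency bucket width of 10 is arbitrary (it sets your resolution), and every bucket is assumed to appear on every day — for sparse buckets you would need to fill gaps (e.g. with untable/filldown) before the second pass:

```
base_query
| bin _time span=1d as day
| bin latency span=10 as lat_bucket
| stats count as c by day lat_bucket
| sort 0 day lat_bucket
| streamstats sum(c) as cum_c by lat_bucket
| sort 0 day lat_bucket
| streamstats sum(cum_c) as running by day
| eventstats sum(cum_c) as total by day
| where running >= 0.9 * total
| stats min(lat_bucket) as approx_p90 by day
```

The first streamstats turns each bucket's daily count into a running total from day 1; the second builds the within-day CDF, and the where/stats pair picks the bucket where the CDF crosses 90%. The answer is approximate to one bucket width, but exact with respect to the full event population.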
I am currently getting this failure when running my add-on through AppInspect: check_inputs_conf_spec_stanzas_has_python_version_property

My README/inputs.conf.spec:

[autofocus_export://<name>]
python.version = python3
label =
interval =

AppInspect report:

Modular input "autofocus_export" is defined in README/inputs.conf.spec, python.version should be explicitly set to python3 under each stanza.
File: README/inputs.conf.spec
Line Number: 1

The current documentation for inputs.conf.spec doesn't mention using python.version. Any help would be appreciated.
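One thing worth checking, per Splunk's Python 3 migration guidance for modular inputs, is that python.version is also declared in the add-on's actual inputs.conf under the global scheme stanza, not only in the .spec file. A sketch (stanza name taken from the post, file location assumed):

```ini
# default/inputs.conf
[autofocus_export]
python.version = python3
```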
The goal is to find the delay between the time the sender sends a mail and the time the recipient receives it, and to alert if the delay is more than 10 minutes.

Options tried: message tracking logs (C:\Program Files\Microsoft\Exchange Server\V14\TransportRoles\Logs\MessageTracking on Exchange Server 2010). But the logs didn't provide the actual time the user sent the email, and the original IP of the sender is replaced with the LB/Exchange server/relay server/firewall. So now I'm looking for other options; one of them is using Splunk Stream. Please provide your suggestions.
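If the message tracking logs are already being indexed, one sketch is to pair each message's RECEIVE and DELIVER events by message ID and alert on the gap. The sourcetype and field names below are assumptions that depend entirely on how the logs were onboarded:

```
sourcetype="MSExchange:2010:MessageTracking" (event_id=RECEIVE OR event_id=DELIVER)
| transaction message_id startswith=eval(event_id=="RECEIVE") endswith=eval(event_id=="DELIVER")
| where duration > 600
| table _time message_id sender recipient duration
```

This measures server-side transit time (RECEIVE to DELIVER), which avoids the problem of the logs not capturing the user's original send time.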
I would like to set some constant values for use in tags in multiple panels of my dashboard (earliest time, latest time, sampling rate, etc.). This dashboard needs to run at a pre-scheduled time and send a PDF report. I set the tokens in an <init> element at the beginning of the dashboard. However, per https://docs.splunk.com/Documentation/Splunk/8.0.5/Viz/tokens, PDF scheduling is disabled for dashboards and forms that include an <init> element. This is what I am experiencing: the dashboard renders OK, but the PDF job fails.

Is there another option to define constant values that can be used in tags in other panel queries? I am deploying these dashboards in an enterprise environment and am looking for a solution within the dashboard XML; creating additional configuration files is not easy to deploy or modify.

Current solution (fails for PDF generation):

<init>
  <set token="sampling">20</set>
  <set token="earliest">@w0-7d</set>
  <set token="latest">@w0</set>
</init>
...
<query>index=xxxx | stats count by internal build </query>
<earliest>$earliest$</earliest>
<latest>$latest$</latest>
<sampleRatio>$sampling$</sampleRatio>
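One workaround sometimes used instead of <init> is a hidden input with a <default>: the token gets set at load time without requiring an <init> element. The $alwaysHidden$ token below is an assumption — it is intentionally never set, so depends keeps the inputs invisible:

```xml
<fieldset>
  <input type="text" token="sampling" depends="$alwaysHidden$">
    <default>20</default>
  </input>
  <input type="text" token="earliest" depends="$alwaysHidden$">
    <default>@w0-7d</default>
  </input>
  <input type="text" token="latest" depends="$alwaysHidden$">
    <default>@w0</default>
  </input>
</fieldset>
```

Whether scheduled PDF rendering honors input defaults should be verified on your Splunk version before relying on this.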
In Splunk MLTK, how do I convert target variables back to their original levels after applying the StandardScaler pre-processing method? What is the SPL command to perform this conversion back to original levels?
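MLTK's StandardScaler applies (x - mean) / stdev and writes the result to an SS_-prefixed field, so one sketch is to invert it manually with eval, recomputing the mean and stdev from the same data. The field name target is an assumption:

```
... | eventstats avg(target) as mu, stdev(target) as sigma
| eval target_restored = 'SS_target' * sigma + mu
```

For model predictions made on the scaled field, the same multiply-and-add applies to the predicted field, provided mu and sigma come from the data the scaler was fitted on.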
LogName=Security
SourceName=Microsoft Windows security auditing.
EventCode=4732
Group:
  Security ID: BUILTIN\Administrators
  Group Name: Administrators
  Group Domain: Builtin

I am interested in reporting on the above event for only Windows Server systems. Searching for the EventCode returns results for all Windows 10 systems as well. Can someone give me an example of how to cross-reference those results with OS type? Thank you!
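One approach is to enrich the results by host using an OS inventory — either a lookup file you maintain, or OS data you already collect (e.g. via WinHostMon). The lookup name and field names below are assumptions for illustration:

```
index=wineventlog EventCode=4732
| lookup host_os_inventory host OUTPUT os_name
| search os_name="*Server*"
| stats count by host os_name
```

The host_os_inventory lookup would map each host to its operating system name, which is then filtered for the "Server" editions.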
Hi! I've been trying to regex out part of the Windows events to save license. Many Windows events contain a large block of text that begins with "This event is generated". I've edited props.conf:

[source::WinEventLog:Security]
TRANSFORMS-removedescription = removeEventDesc1

and transforms.conf:

[removeEventDesc1]
LOOKAHEAD = 16128
REGEX = (?msi)(.*)This event is generated
DEST_KEY = _raw
FORMAT = $1

(based on https://www.hurricanelabs.com/splunk-tutorials/windows-event-log-filtering-design-in-splunk)

But it isn't working. Is there another way to do this? I've installed forwarders on my Windows systems, and have already blacklisted events in inputs.conf (that works). Thanks in advance, and sorry for my English; I'm from Paraguay.
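One thing to verify is where the props/transforms live: index-time transforms only run where parsing happens (the indexers or a heavy forwarder), not on universal forwarders. An alternative sketch using SEDCMD in props.conf on the parsing tier, which strips everything from the marker text to the end of the event:

```ini
# props.conf (on the indexer or heavy forwarder)
[source::WinEventLog:Security]
SEDCMD-remove_desc = s/(?ms)This event is generated.*$//
```

SEDCMD changes also only take effect for data indexed after the restart; already-indexed events are unaffected.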
Hi, as the title suggests: where do I find the older release 6.5.2 RPM package for Splunk Enterprise? We are finally going to upgrade, but we need to test restore functionality in a test environment, upgrade there, and then run it on production if all configs translate without issues.
Need some help with a query. Sample data:

{ "id": "123", "start_time": "2020-08-01 15:00:00", "end_time": "2020-08-01 16:00:00", "status": "FAIL" }
{ "id": "124", "start_time": "2020-08-01 16:05:00", "end_time": "2020-08-01 16:30:00", "status": "SUCCESS", "original_id": "123" }

Expected output (in table format) should have only 1 record:

id  | start_time          | end_time            | status
123 | 2020-08-01 15:00:00 | 2020-08-01 16:30:00 | SUCCESS

The result shows data from id=123, but overrides some fields, like end_time and status, from the latter event. Goal: when we have data where id=original_id, override some fields of the former event with those of the latter event.
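One sketch: key every event by the id of the original in the chain, then let stats take the earliest start and the latest end/status per chain. This assumes the follow-up event is indexed later in _time than the original, since latest() picks by event time:

```
base_search
| eval chain_id = coalesce(original_id, id)
| stats earliest(start_time) as start_time, latest(end_time) as end_time, latest(status) as status by chain_id
| rename chain_id as id
```

coalesce makes the retry event (original_id=123) and the original event (id=123) share the same group key, so stats collapses them into one row.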
I would like to put together a graph with the difference of values as a percentage, so I can use a single value visualization that turns red above 70%.

index="teste" "ProcessInboundEmail"
| timechart span=10m dc(id_email) as ProcessInboundEmail
| appendcols [search index="teste" "submitInboundEmail" ended | timechart span=10m dc(id_email) as submitInboundEmail]
| eval diff = ProcessInboundEmail - submitInboundEmail
| eval diff=if(ProcessInboundEmail < submitInboundEmail, diff * -1, diff)

Can you help me?
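To express the gap as a percentage, divide by one of the two series — using ProcessInboundEmail as the denominator is an assumption about which side the baseline should be:

```
...
| eval diff_pct = round(abs(ProcessInboundEmail - submitInboundEmail) / ProcessInboundEmail * 100, 2)
```

The single value visualization can then color by range, with a threshold at 70 switching to red.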
A log subscription is set on the Cisco ESA appliance (IronPort Text Mail Logs) which is set to forward to a syslog-ng server, which then writes to a unique file. The inputs.conf is configured to monitor the file path. Splunk ingests the data, but fields are not extracting. Has anyone run into this issue and found a workaround?
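Field extractions in the Cisco ESA add-on are keyed to its expected source types, so one thing to check is that the monitored file is being assigned the add-on's sourcetype rather than a generic syslog one. A sketch, with a placeholder path and the sourcetype used by the Splunk Add-on for Cisco ESA for text mail logs:

```ini
# inputs.conf
[monitor:///var/log/syslog-ng/esa/*.log]
sourcetype = cisco:esa:textmail
index = email
```

The exact sourcetype name should be confirmed against the version of the add-on installed.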
I want to stop all remote logins to a Splunk server. To do this, I added the following to etc/system/local/server.conf (as documented in https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/Serverconf):

allowRemoteLogin = never

After restarting Splunk, the web console is still accessible remotely. I also commented out the following in etc/system/default/server.conf to rule out a conflict, but the issue persists:

# allowRemoteLogin=requireSetPassword

What am I missing?
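Note that allowRemoteLogin governs logins over the splunkd management port, not Splunk Web, which is why the web console is unaffected. If the goal is to make the web console unreachable from other machines, one sketch is binding Splunk Web to the loopback interface in web.conf:

```ini
# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
server.socket_host = 127.0.0.1
```

Also, changes belong in etc/system/local (or an app), never in etc/system/default, which is overwritten on upgrade.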
We have clients that use Splunk Cloud and would like to pull our data into their Splunk via REST API. For Splunk Enterprise, we suggested the REST API modular input on Splunkbase. What is the equivalent in Splunk Cloud? In other words, what is the best way to pull JSON data into Splunk Cloud from a REST API?
I am trying to get the add-on for CNAE to work. I have configured it according to the guideline, but I don't get any data into Splunk. splunkd.log shows these errors:

08-04-2020 15:03:27.963 +0200 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA_cisco-candid/bin/collectCandid.py -candid"     main(sys.argv[1])
08-04-2020 15:03:27.963 +0200 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA_cisco-candid/bin/collectCandid.py -candid"   File "/opt/splunk/etc/apps/TA_cisco-candid/bin/collectCandid.py", line 251, in main
08-04-2020 15:03:27.963 +0200 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA_cisco-candid/bin/collectCandid.py -candid"     candid_credentials = _getCredentials(sessionKey)
08-04-2020 15:03:27.963 +0200 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA_cisco-candid/bin/collectCandid.py -candid"   File "/opt/splunk/etc/apps/TA_cisco-candid/bin/collectCandid.py", line 34, in _getCredentials
08-04-2020 15:03:27.963 +0200 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA_cisco-candid/bin/collectCandid.py -candid"     if entities:
08-04-2020 15:03:27.963 +0200 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA_cisco-candid/bin/collectCandid.py -candid" UnboundLocalError: local variable 'entities' referenced before assignment

I have disabled SSL. Any hint what I should look for? Thanks!
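The traceback points at a bug pattern in the add-on's _getCredentials: entities is only assigned inside a block that can fail (typically a try/except that swallows the exception), so the later "if entities:" reference blows up when the fetch fails. A minimal, hypothetical reproduction of the pattern — not the add-on's actual source:

```python
def get_credentials(fetch):
    """Fetch credential entities; mirrors the failure mode seen in the log."""
    try:
        entities = fetch()          # if this raises, 'entities' is never bound
    except Exception:
        pass                        # error swallowed, but the name stays undefined
    if entities:                    # UnboundLocalError when fetch() failed
        return entities
    return None


def broken_fetch():
    raise RuntimeError("splunkd endpoint unreachable")
```

When fetch() succeeds the function works; when it raises, Python hits the unbound local. The UnboundLocalError is therefore masking an earlier failure — likely the credentials call to splunkd failing (possibly related to the SSL change) — so the real fix is to initialize entities = None before the try and surface the underlying error.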
Hi there, thank you for stopping by and helping. I have a regex which extracts all URLs and domains from a given field. It works fine in the following search:

| makeresults
| eval body="An issue on an object you are monitoring-- details here: https://SOME1-ORIONWB01:443/Orion/View.aspx?NetObje?a=b&c=dct=I:54020.<br/>View full alert details here: https://SOME2-ORIONWB01:443/Orion/View.aspx?NetObject=AAT:90460"
| rex max_match=0 field=body "(?<URL>(?<proto>(((https?|ftp|gopher|telnet|file)(:\/\/))|(www\.)))(?<domain>[:\w\.-]+)[\w\&\=\/\?\.\:]+)"

But when I put the same regex in props.conf (Settings » Fields » Field extractions » domain_extractor), it extracts only the first URL and domain. The manual search has max_match=0, but I could not find a similar config for field extractions. Is there such a setting, or do we need to do this a different way? TIA.
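Inline EXTRACT- definitions only capture the first match, but a transforms-based extraction can return every match via MV_ADD. A sketch, assuming the events' sourcetype is orion_alerts (a placeholder) and reusing the regex from the post:

```ini
# transforms.conf
[domain_extractor]
REGEX = (?<URL>(?<proto>(((https?|ftp|gopher|telnet|file)(:\/\/))|(www\.)))(?<domain>[:\w\.-]+)[\w\&\=\/\?\.\:]+)
MV_ADD = true

# props.conf
[orion_alerts]
REPORT-domain_extractor = domain_extractor
```

With MV_ADD = true, each additional match is appended as another value of the multivalue field instead of being discarded.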
I have a table that shows the number of logs by severity for each host. I want to rearrange the severity columns into a specific order but can't figure out how; I tried a custom sort field via eval, but it didn't seem to work.

Current SPL:

index=foo sourcetype=logs
| chart count over server by severity

Current table order:

server  | major | info | critical | minor
serverA | 5     | 10   | 8        | 6
serverB | 22    | 5    | 13       | 9

Desired table order:

server  | critical | major | minor | info
serverA | 8        | 5     | 6     | 10
serverB | 13       | 22    | 9     | 5
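Since the severity values are known in advance, the simplest fix is to pin the column order with table after the chart:

```
index=foo sourcetype=logs
| chart count over server by severity
| table server critical major minor info
```

The trade-off is that any severity value not listed is dropped; add it to the table clause if new severities can appear.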
Hey Splunk world,

I've learned that the nullQueue option for eliminating unneeded data has to be installed on an indexer as a props/transforms combo, rather than placed on a UF. I have one more question about this, as I run into this roadblock quite often in our Splunk Cloud environment. Since we are on Splunk Cloud v7.2, we run into rolling restarts on sourcetype additions and app installs. The way our admins have done things in the past is to add a props/transforms in a custom app, install it on the SH + indexers, and then let those settings apply globally across the environment for the specific sourcetype.

Is there a different way of implementing the props/transforms combo below without causing the rolling restart via app install? I usually add extractions via the GUI, which avoids a rolling restart, but with the format required for this data manipulation I'm not sure I can use the GUI. I'm mainly trying to save time and not wait for our Saturday maintenance window to make a change. Splunk Cloud does not allow access to the server CLI per SH or indexer.

props.conf:

[linux_audit]
TRANSFORMS-null = setnull

transforms.conf:

[setnull]
REGEX = type=CWD | key=\"delete\"
DEST_KEY = queue
FORMAT = nullQueue

Thank you!
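Separately from the deployment question, note that the spaces around the pipe in that REGEX are significant (the alternation becomes "type=CWD " with a trailing space, or " key=..." with a leading one), and the backslash-escaped quotes are taken literally in a .conf file. A cleaned-up sketch of what the filter presumably intends:

```ini
# transforms.conf
[setnull]
REGEX = type=CWD|key="delete"
DEST_KEY = queue
FORMAT = nullQueue
```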