All Topics



I have a few web monitor inputs configured on a Heavy Forwarder to ping a URL every minute. I then set up alerts on this to notify me if I get fewer than 25 pings with response_code=200 within 30 minutes. I have been getting a ton of false alerts with these. Most of the time when I check the alert, I still see a high enough number of successful pings for the past 30 minutes that the alert should NOT have fired. Every now and then there really is a low enough number of successful pings to fire the alert, but when I check my URL everything is fine. Is there any reason the HF would delay sending these pings every minute? Thanks in advance.
I am wondering how the whitelist lookups concept works in the ThreatHunting app. Is it something we need to push data into manually every time, or is there an automatic way to populate the required fields?
If I have data in the following format:

time session event
t1 session1 actionA
t2 session1 actionB
t3 session1 actionC
t4 session1 actionA
t5 session2 actionB
t6 session2 actionC

I want to write a Splunk query to transform it to this format:

from to count timetaken
actionA actionB 1 (t2-t1)
actionB actionC 2 (t3-t2) + (t6-t5)
actionC actionA 1 (t4-t3)

Can someone recommend an expression for this?
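For reference, the transformation described above (count each from→to transition within a session and sum the time taken) can be sketched outside Splunk. This is only an illustration of the logic, with the symbolic times t1..t6 replaced by the numbers 1..6; in SPL you would likely reach for streamstats with a by session clause instead.

```python
from collections import defaultdict

# Sample events as (time, session, event); times t1..t6 assumed numeric here.
events = [
    (1, "session1", "actionA"),
    (2, "session1", "actionB"),
    (3, "session1", "actionC"),
    (4, "session1", "actionA"),
    (5, "session2", "actionB"),
    (6, "session2", "actionC"),
]

def transitions(events):
    """Count from->to transitions within each session and sum the time taken."""
    count = defaultdict(int)
    taken = defaultdict(int)
    prev = {}  # session -> (time, event) of the previous row in that session
    for t, session, event in sorted(events):
        if session in prev:
            pt, pe = prev[session]
            count[(pe, event)] += 1
            taken[(pe, event)] += t - pt
        prev[session] = (t, event)
    return {k: (count[k], taken[k]) for k in count}

result = transitions(events)
# actionB -> actionC occurs twice: once in session1 and once in session2.
```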
Hello, I am currently configuring Splunk with LDAP / AD. The Splunk server is installed on CentOS 7, Splunk version 7.1. Splunk Web must be usable only by users in GROUP1; GROUP1 is mapped to the admin role. The user minos exists only in AD, not in Splunk Web. When minos is not a member of GROUP1, it is not listed and does not appear in the log. As soon as minos is added to GROUP1, it shows up in the log file: "Found matching group="GROUP1" with mapped roles". So it seems to be working as expected. But:

1) I get the error message "Could not get roles for user that does not exist: minos". What am I doing wrong? What is missing, and where? Any suggestions? Of course I looked around the forum, but found nothing obvious.

2) There is also a user that does not exist in LDAP, and I am wondering where it comes from (I had already removed any reference to it in the local.meta file): user="nobody" was not cached .... Could not find user="nobody" with strategy="advm"

Thanks. Extract of the Splunk log file: [...]
04-09-2020 15:03:45.002 +0000 DEBUG ScopedLDAPConnection - strategy="advm" Initializing with LDAPURL="ldap://:389"
04-09-2020 15:03:45.002 +0000 DEBUG ScopedLDAPConnection - strategy="advm" Attempting bind as DN="cn=administrador,cn=users,dc=XXX,dc=com"
04-09-2020 15:03:45.004 +0000 DEBUG ScopedLDAPConnection - strategy="advm" Bind successful
04-09-2020 15:03:45.004 +0000 DEBUG ScopedLDAPConnection - strategy="advm" Attempting to search subtree at DN="cn=users,dc=XXXX,dc=com" using filter="(&(samaccountname=minos)(memberof=CN=GROUP1,CN=Builtin,DC=XXXX,DC=com)(displayname=*))"
04-09-2020 15:03:45.007 +0000 DEBUG ScopedLDAPConnection - strategy="advm" Search duration="3.220 milliseconds"
04-09-2020 15:03:45.007 +0000 DEBUG ScopedLDAPConnection - strategy="advm" Loading entry attributes for DN="CN=minos,CN=Users,DC=XXX,DC=com"
04-09-2020 15:03:45.007 +0000 DEBUG ScopedLDAPConnection - strategy="advm" Adding attribute="displayName" with value="minos"
04-09-2020 15:03:45.007 +0000 DEBUG AuthenticationManagerLDAP - Attempting to get roles for user="minos" with DN="CN=minos,CN=Users,DC=XXXX,DC=com" in strategy="advm"
04-09-2020 15:03:45.007 +0000 DEBUG ScopedLDAPConnection - strategy="advm" Attempting to search subtree at DN="cn=builtin,dc=XXXX,dc=com" using filter="(&(member=CN=minos,CN=Users,DC=XXXX,DC=com)(cn=*))"
04-09-2020 15:03:45.009 +0000 DEBUG ScopedLDAPConnection - strategy="advm" Search duration="1382 microseconds"
04-09-2020 15:03:45.009 +0000 DEBUG ScopedLDAPConnection - strategy="advm" Loading entry attributes for DN="CN=GROUP1,CN=Builtin,DC=XXX,DC=com"
04-09-2020 15:03:45.009 +0000 DEBUG ScopedLDAPConnection - strategy="advm" Adding attribute="cn" with value="GROUP1"
04-09-2020 15:03:45.009 +0000 DEBUG AuthenticationManagerLDAP - Mapping groups for user="minos" for group DN="CN=GROUP1,CN=Builtin,DC=XXX,DC=com"
04-09-2020 15:03:45.009 +0000 DEBUG AuthenticationManagerLDAP - "Found matching group="GROUP1" with mapped roles"
04-09-2020 15:03:45.009 +0000 DEBUG AuthenticationManagerLDAP - Successfully filled info for user="minos" with realname="minos" and email="" in strategy="advm"
04-09-2020 15:03:45.009 +0000 DEBUG ScopedLDAPConnection - strategy="advm" Successfully performed unbind
04-09-2020 15:03:45.009 +0000 DEBUG AuthenticationManagerLDAP - Caching user="minos" with DN="CN=minos,CN=Users,DC=XXXX,DC=com"
04-09-2020 15:03:45.009 +0000 ERROR AuthenticationManagerSplunk - Could not get roles for user that does not exist: minos
04-09-2020 15:03:45.011 +0000 INFO UserManagerPro - Login failed for user="minos", elapsed time=0.001 seconds
[...]

Here is my authentication.conf file:

[advm]
SSLEnabled = 0
anonymous_referrals = 1
bindDN = cn=administrador,cn=users,dc=XXX,dc=com
bindDNpassword =
charset = utf8
emailAttribute = mail
groupBaseDN = cn=builtin,dc=XXX,dc=com
groupMappingAttribute = dn
groupMemberAttribute = member
groupNameAttribute = cn
host =
nestedGroups = 0
network_timeout = 20
port = 389
realNameAttribute = displayname
sizelimit = 1000
timelimit = 15
userBaseDN = cn=users,dc=XXXX,dc=com
userBaseFilter = (memberof=CN=GROUP1,CN=Builtin,DC=XXX,,DC=com)
userNameAttribute = samaccountname

[authentication]
authSettings = advm
authType = LDAP

[roleMap_advm]
admin = GROUP1
I have a simple timechart showing the percentage of events with status=SUCCESS out of the total count of phase=second events:

index=logs phase=second
| timechart span=7d count AS total count(eval(status="SUCCESS")) AS success
| eval Percentage=round((success/total)*100,2)
| table _time Percentage

This report runs every 7 days, so it tells me the percentage for that week:

_time Percentage
2018-05-17 31.91
2018-05-24 61.38
2018-05-31 11.36

I am trying to calculate the deltas from week to week, so for example:

2018-05-17 to 2018-05-24: 0.3191 - 0.6138 = -0.2947 change
2018-05-24 to 2018-05-31: 0.6138 - 0.1136 = 0.5002 change

I cannot seem to figure out how to subtract each value from the value 7 days before it. Thanks!
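As an aside, SPL's delta command is built for consecutive-row differences (something like | delta Percentage AS change appended to the search), though you should check the sign convention it produces against what you want. The arithmetic itself is just a pairwise difference over the ordered rows, sketched here in Python using the sample values from the question (change is computed as current minus previous; flip the subtraction if you prefer the other convention):

```python
def weekly_deltas(rows):
    """Given ordered (date, percentage) rows, return the change between
    each consecutive pair as (label, current - previous)."""
    out = []
    for (d1, p1), (d2, p2) in zip(rows, rows[1:]):
        out.append((f"{d1} to {d2}", round(p2 - p1, 2)))
    return out

rows = [("2018-05-17", 31.91), ("2018-05-24", 61.38), ("2018-05-31", 11.36)]
deltas = weekly_deltas(rows)
```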
I have this search:

host=app-dev-001 rehire OR terminating OR new_hire OR "changes supervisor"
| convert timeformat="%Y-%m-%d %H:%M:%S" ctime(_time) AS date
| sort date
| table date rehire term_user new_hire super_change

and I get results:

date rehire term_user new_hire super_change
4/9/20 17:31 okaalsnd
4/9/20 17:31 nineanls
4/9/20 17:31 mcahmcui
4/9/20 17:31 ogrga
4/9/20 17:31 arjsgasp
4/9/20 17:31 cbldenia
4/9/20 17:31 rekenid
4/9/20 17:31 luchgoja
4/9/20 17:31 uhsig
4/9/20 17:31 huanecdc
4/9/20 17:31 erni
4/9/20 17:31 stlieez.
4/9/20 17:31 tmaonlhe.
4/9/20 17:31 joedbers.
4/9/20 17:31 inbhdrre.
4/9/20 17:31 grarcacm.
4/9/20 17:31 2loj.
4/9/20 17:31 vavmeass.
4/9/20 17:31 wuelnjoo.
4/9/20 17:31 mhabin
4/9/20 17:31 cleadmra
4/9/20 17:31 nenahna
4/9/20 17:31 nbveteen
4/9/20 17:31 (sonaliue) changes supervisor from enfkaoi/id=83802 to fakesuper/id=42
4/9/20 17:31 (adkcuohh) changes supervisor from mhanaesr/id=134685 to fakesuper/id=42
4/9/20 17:31 (kvganeng) changes supervisor from nbynae/id=88564 to fakesuper/id=42
4/9/20 17:31 (ccncecpo) changes supervisor from hkdywaav/id=68086 to fakesuper/id=42
4/9/20 17:31 (jefai) changes supervisor from gawzignh/id=1163 to fakesuper/id=42
4/9/20 17:31 (uralsa) changes supervisor from rjajaaay/id=527197 to fakesuper/id=42

But when I click on the visualization tab I get an empty graph.
I am combining 3 source types. I've tried using | stats values() but can't seem to get it to work. Here is an example of what I currently have written, but it runs too slow:

index=integration sourcetype=Incident
| join type=left Assignment_Group
    [search index=integration sourcetype=Assignment
    | rename NAME AS Assignment_Group Team_Leader AS Leader_ID
    | join type=left Leader_ID
        [search index=integration sourcetype=ROLLUP_ORG_LEVELS
        | rename ID AS Leader_ID ]]
| dedup Incident_ID
| table Incident_ID Assignment_Group LVL3_MGR
We need to add a set of data model accelerations and we would like to understand the impact on the system. The following query helps, but we would like to know if there is any other way:

`dmc_set_index_introspection` host=<name> sourcetype=splunk_resource_usage component=PerProcess data.search_props.sid::*
| `dmc_rename_introspection_fields`
| `dmc_set_bin`
| stats dc(sid) AS distinct_search_count by _time, type
| `dmc_timechart` Median(distinct_search_count) AS "Median of search concurrency" by type
I am trying to connect with the REST API, following this guide: https://answers.splunk.com/answers/685730/can-i-use-rest-api-without-curl.html I can obtain the session key, but when using it I still get an unauthorized error when trying to pull the results of a search. I am going through a proxy server to make my request and avoid CORS issues. Any pointers would be appreciated.
We have expired and deleted jobs that don't clear. Does switching the SH captain, on a regular basis, help with clearing up expired and deleted jobs?
The accountExpires field in an AD log is described as: "The date when the account expires. This value represents the number of 100-nanosecond intervals since January 1, 1601 (UTC). A value of 0 or 0x7FFFFFFFFFFFFFFF (9223372036854775807) indicates that the account never expires." https://docs.microsoft.com/en-us/windows/win32/adschema/a-accountexpires

The long integer doesn't follow standard unix epoch time, so the strftime function doesn't seem to apply directly. Does anyone know the formula for converting this?

Sample data set (accountExpires, accountExpires_strftime, ActualExpiry):

132066576000000000, 11:59.59 pm, Fri 12/31/9999, 03/07/2019 21:00
0, 01:00.00 am, Thu 01/01/1970, Never
131775408000000000, 11:59.59 pm, Fri 12/31/9999, 31/07/2018 21:00
131748624000000000, 11:59.59 pm, Fri 12/31/9999, 30/06/2018 21:00
131693328000000000, 11:59.59 pm, Fri 12/31/9999, 27/04/2018 21:00

Thanks in advance, Sheamus
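The conversion from this Windows FILETIME-style value to unix epoch seconds is: divide by 10,000,000 (100-ns intervals per second) and subtract 11,644,473,600 (the seconds between 1601-01-01 and 1970-01-01 UTC); in SPL that would be an eval along the lines of accountExpires/10000000 - 11644473600 fed into strftime. A sketch of the arithmetic, with the sentinel values from the Microsoft docs handled explicitly:

```python
EPOCH_DELTA_SECONDS = 11644473600  # seconds from 1601-01-01 to 1970-01-01 (UTC)
NEVER = 0x7FFFFFFFFFFFFFFF

def filetime_to_unix(account_expires):
    """Convert AD accountExpires (100-ns intervals since 1601-01-01 UTC)
    to unix epoch seconds. Returns None for the 'never expires' sentinels."""
    if account_expires in (0, NEVER):
        return None
    return account_expires // 10_000_000 - EPOCH_DELTA_SECONDS

expiry = filetime_to_unix(132066576000000000)
# expiry corresponds to 2019-07-03 20:00:00 UTC, which matches the sample
# row's ActualExpiry of 03/07/2019 21:00 in a UTC+1 timezone.
```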
I have a metadata search to detect when a host stops sending logs. I'd like to change the timeframe so that I only see hosts whose last time reported is between 1 and 90 days ago; I do not want to see anything if the last time reported was beyond that. When I change the time picker to 90 days, I still see events well past 90 days ago, so I know I need to change the query instead, but I am not sure what exactly I should add. Can someone please help? Thank you!

| metadata type=hosts index=*
| where relative_time(now(), "-1d") > lastTime
| convert ctime(lastTime) as Latest_Time
| sort -lastTime
| table host,Latest_Time
| lookup assets.csv nt_host AS host OUTPUTNEW priority AS priority,bunit AS bunit
| rename Latest_Time AS "Last Time Reported"
Hello, I have data in plain text format that contains several datetime values; it looks like this:

XXXXXXXX201710101005582018101010055820191010100558

20171010100558 = date1
20181010100558 = date2
20191010100558 = date3

I have successfully configured props.conf to extract the event timestamp from the first occurrence (date1), using the following config:

TIME_FORMAT = %Y%m%d%H%M%S
MAX_TIMESTAMP_LOOKAHEAD = 22
TIME_PREFIX = .{8}

Now, things get complicated because date2 and date3 could be null (either one of the two, or both). How can I configure props.conf to handle the following rule:
- look for date2 first; if it exists, use it as the event timestamp
- if date2 is null, look for date3; if it exists, use it as the event timestamp
- if date2 and date3 are both null, use date1 as the event timestamp

Thanks in advance for your help.
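For clarity, the selection rule being asked about (date2, else date3, else date1) can be expressed as a small piece of logic. This sketch assumes a null slot shows up as all zeros or is simply absent from the end of the record; the question doesn't say how nulls are actually encoded, so the valid() check is an assumption you would adjust:

```python
def pick_timestamp(record):
    """record = 8-char prefix + up to three 14-char datetime slots
    (YYYYmmddHHMMSS).  Preference order: date2, then date3, then date1."""
    d1 = record[8:22]
    d2 = record[22:36]
    d3 = record[36:50]

    def valid(d):
        # Assumption: a null slot is missing, non-numeric, or all zeros.
        return len(d) == 14 and d.isdigit() and int(d) != 0

    for d in (d2, d3, d1):
        if valid(d):
            return d
    return None

rec = "XXXXXXXX201710101005582018101010055820191010100558"
# All three slots present -> date2 wins.
```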
I have a search which detects when a host stops sending logs, then does a lookup against my assets lookup table file, which is a KV store lookup, to fetch the bunit and priority of the particular asset. The search works when the capitalization matches between the search results and the lookup table, but if they do not match exactly it will not fetch the bunit or priority. How can I make my search case insensitive so that it matches regardless of capitalization? This is my search:

| metadata type=hosts index=*
| where relative_time(now(), "-1d") > lastTime
| convert ctime(lastTime) as Latest_Time
| sort -lastTime
| table host,Latest_Time
| lookup assets.csv nt_host AS host OUTPUTNEW priority AS priority,bunit AS bunit
| rename Latest_Time AS "Last Time Reported"

Thank you!
Hi All, I have been trying to load the Splunk BOTS v2.0 data into my Splunk trial version deployed on Windows 10, but I'm facing issues with it. I initially tried uploading the dataset as an application, as I did with the v1.0 dataset, but that did not work out. Then I tried unzipping the .tar file directly into the following path: C:\Program Files\Splunk\etc\apps\botsv2_data_set_attack_only but that hasn't worked either. I have now run out of options and hence am reaching out to the community. Also, if this question has already been asked, sorry for reposting it.
Hello everyone! How can I extract a field that has several different values while excluding certain specific ones? I need to extract the values of the "Domain" field, excluding the "Corp" and "Corp - West" values but showing me the rest:

Domain = "Corp - West\OfficeABC\Server*"
Domain = "Corp\OfficeXYZ\Workstations*"
Domain = "Default*"

Example log:

2020-04-06 18:54:30.000, _time="2020-04-06 18:54:30.0", ComputarName="XYZ001", Usuer="userx", Domain="Corp\OfficeXYZ\Workstations\", IP="54.110.130.34"
2020-04-06 18:59:10.000, _time="2020-04-06 18:59:10.0", ComputarName="XYZ101", Usuer="usera", Domain="Corp - West\OfficeABC\Servers\", IP="38.230.86.56"
2020-04-06 19:09:30.000, _time="2020-04-06 19:09:30.0", ComputarName="XYZ201", Usuer="userb", Domain="Default\", IP="179.28.186.78"

Thanks in advance. James._/\_
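In SPL this kind of exclusion is usually just Domain!="Corp\\*" Domain!="Corp - West\\*" after the field is extracted. The underlying extract-then-exclude logic, applied to the sample log lines above, can be sketched like this (prefix matching on the backslash-terminated domain root, so "Corp - West" is not swallowed by the "Corp" rule):

```python
import re

# Sample events from the question; backslashes are escaped for Python literals.
events = [
    '2020-04-06 18:54:30.000, Domain="Corp\\OfficeXYZ\\Workstations\\", IP="54.110.130.34"',
    '2020-04-06 18:59:10.000, Domain="Corp - West\\OfficeABC\\Servers\\", IP="38.230.86.56"',
    '2020-04-06 19:09:30.000, Domain="Default\\", IP="179.28.186.78"',
]

# Excluded domain roots, each followed by the path separator so that
# "Corp\..." does not also match "Corp - West\...".
EXCLUDE = ("Corp\\", "Corp - West\\")

def keep(event):
    """Extract the Domain value and keep the event unless it starts
    with one of the excluded prefixes."""
    m = re.search(r'Domain="([^"]*)"', event)
    if not m:
        return False
    return not m.group(1).startswith(EXCLUDE)

kept = [e for e in events if keep(e)]
# Only the Default\ event survives the filter.
```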
I have an app on a deployment server that runs a script and has splunk ingest the output which is valid xml. I've added a props.conf on the Search Heads with KV_MODE=xml but no fields are being extracted. When I run | xmlkv at the end of my query it extracts all xml fields. Is there anything I'm missing that would cause Splunk not to extract the xml fields automatically? Thanks in advance
Hi, I have a .csv file without a header but with fixed fields which I would like to send to my Splunk server via the universal forwarder on the corresponding Linux host. I understand that I need to configure inputs.conf on the universal forwarder and that I need to define the .csv on the indexer in props.conf like the following:

[mycsv]
FIELD_DELIMITER=,
FIELD_NAMES=field1,field2,field3,field4

However, it's not clear to me in which props.conf I need to put the above definition. I have found the following props.conf files on my indexer/Splunk server:

/opt/splunk/etc/apps/splunk_internal_metrics/default/props.conf
/opt/splunk/etc/apps/Splunk_TA_nix/default/props.conf
/opt/splunk/etc/apps/search/default/props.conf
/opt/splunk/etc/apps/SplunkLightForwarder/default/props.conf
/opt/splunk/etc/apps/Splunk_TA_apache/default/props.conf
/opt/splunk/etc/apps/sample_app/default/props.conf
/opt/splunk/etc/apps/legacy/default/props.conf
/opt/splunk/etc/apps/splunk_instrumentation/default/props.conf
/opt/splunk/etc/apps/splunk_archiver/default/props.conf
/opt/splunk/etc/apps/splunk_monitoring_console/default/props.conf
/opt/splunk/etc/apps/learned/local/props.conf
/opt/splunk/etc/system/default/props.conf

Do I need to copy the props.conf from /opt/splunk/etc/apps/search/default/props.conf to /opt/splunk/etc/system/default/props.conf and then add my [mycsv] definition above? Or do I need to modify the props.conf in /opt/splunk/etc/apps/search/default/props.conf and leave it there? Or do I need to modify the props.conf in /opt/splunk/etc/apps/Splunk_TA_nix/default/props.conf? Or is it any of the other props.conf files? I only want to search the index afterwards with the standard search in Splunk. Kind regards, and thanks in advance for any help.
Hi, I have a Splunk HTML dashboard; below is the URL:

https://xxxxx/en-US/app/xxxx/xxx?form.tab=C4.MVX&form.field1=Conveyor%20DE%20Velocity&form.field1a=%22-6h%22&form.field1CR2=%22Crusher.2.MTR%20DE%20Velocity%22&form.field1aCR2=%22-6h%22&earliest=0&latest=

The dashboard contains form inputs (drop-downs). How can I filter meta characters (special characters like %, <, >) from user input or from the URL to prevent XSS attacks? Please help me out with this.
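The general defense against reflected XSS is to HTML-escape (or reject via an allowlist) every query-string value before it is echoed back into the page; for Splunk dashboards specifically, the built-in token handling should also be preferred over hand-rolled filtering where possible. A generic sketch of the escaping step, using a hypothetical URL:

```python
import html
from urllib.parse import parse_qs, urlparse

def sanitize_params(url):
    """Parse the query string and HTML-escape every value so that markup
    characters (<, >, &, quotes) cannot break out into the page."""
    params = parse_qs(urlparse(url).query)
    clean = {}
    for key, values in params.items():
        # quote=True also escapes " and ', which matters inside attributes.
        clean[key] = [html.escape(v, quote=True) for v in values]
    return clean

# Hypothetical attack URL: a script tag injected into a form token.
url = "https://example.com/app?form.tab=<script>alert(1)</script>"
clean = sanitize_params(url)
# The value is rendered inert once escaped.
```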
I have a dedicated server which runs syslog-ng and a universal forwarder. I want to set 3 things, one of them dynamically:

# /opt/splunkforwarder/etc/system/local/inputs.conf
[monitor:///data/syslog-ng/logs/u514/cisco/ios/*/*.log]
sourcetype = syslog
source = syslog-ng:udp514
host_segment = 7

The problem is that I cannot set source and host_segment (or host_regex) at the same time, because host_segment operates (why on earth, I don't know) on the source string: it takes segment 7 of the source path as the host value. So if I define the source myself, host_segment will fail. Is it possible to have a manually created source field and a dynamically generated host field? I could do this by creating a new props.conf and transforms.conf to manipulate the source segment, but I do not want this to be applied globally; there are a few logs for which I do not want that.