All Topics

I have the query below and I'm trying to get the count of hosts affected by each vulnGrouping, split by priority, whereas currently the query returns the total count for both priorities combined. The SPL groups like software under a high-level name (e.g., Adobe, Cisco Software, Oracle Software), then applies logic to determine the risk level, and lastly gets a count of the IPv4 addresses affected.

| eval vulnGrouping=case(plugin_name like "Adobe%", "Adobe", plugin_name like "Google%", "Google Chrome", plugin_name like "Oracle%", "Oracle Software", plugin_name like "Cisco%", "Cisco Software")
| stats values(priority) dc(ipv4) by vulnGrouping

The output is similar to the below:

vulnGrouping    values(priority)   dc(ipv4)
Adobe           Critical High      100
Google Chrome   Critical High      500

Where I'd like to be is something like this:

vulnGrouping    values(priority)   dc(ipv4)
Adobe           Critical High      75 25
Google Chrome   Critical High      150 350

Any ideas or help is greatly appreciated.
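One way to split the distinct-host count by priority is to make priority part of the group-by and pivot it into columns; a sketch reusing the eval from the post (with chart doing the pivot, values(priority) is no longer needed):

```
| eval vulnGrouping=case(plugin_name like "Adobe%", "Adobe", plugin_name like "Google%", "Google Chrome", plugin_name like "Oracle%", "Oracle Software", plugin_name like "Cisco%", "Cisco Software")
| chart dc(ipv4) AS hosts OVER vulnGrouping BY priority
```

This yields one column per priority value (e.g., Critical, High), each holding the distinct IPv4 count for that vulnGrouping.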
I have events that look like this, and I am using the field extractor:

"timestamp": "2020-12-09T18:05:03.6664112Z",
"scopeType": "organization",
"scopeDisplayName": "1D (Organization)",
"scopeId": "920941ec-025f-4d4c-9944-e7d357de7d94",
"actionId": "Deleted",
"data": {
    "ProjectName": "ATI Libs",
    "RepoId": "eb1e2a37-0833-462a-b3e6-031aa1d1f006",
    "RepoName": "libs-01"
},

I tried to extract fields using both the delimiter option (":") and regex. When I use a delimiter of ",", it creates the first field, 'timestamp', correctly but then lumps everything after that into a single field. When I try to use regex to extract a field, for example by highlighting the value "ATI Libs", I get this error: "The extraction failed. If you are extracting multiple fields, try removing one or more fields. Start with extractions that are embedded within longer text strings." Please advise, thanks.
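Since the events are JSON, Splunk's automatic JSON extraction may work better than the interactive field extractor. A minimal props.conf sketch (the sourcetype name here is hypothetical; substitute your own):

```
[my:json:sourcetype]
KV_MODE = json
```

Alternatively, piping the search through | spath parses JSON events at search time without any configuration change, which is a quick way to test whether the events are well-formed enough for automatic extraction.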
I have been given this query to get data into DB Connect. It works perfectly fine as a batch input, but I want to run it as a rising-column input using EVENT_TIME, starting from January 1st 2020. What should be added to the query below? I tried adding AND EVENT_TIME > ? ORDER BY EVENT_TIME DESC, and it gives java.sql.SQLDataException: ORA-01861: literal does not match format string. If I change EVENT_TIME to EVENT_NAME it works, but I want the rising column to be the event time. Please help.

SELECT
    sa.main_location AS main_location,
    sa.sub_location AS sub_location,
    sa.event_name AS event_name,
    sa.event_type AS event_type,
    to_char(sa.event_time, 'mm/dd/yyyy hh24:mi:ss') AS event_time,
    sa.entity_type AS entity_type,
    su.sys_user_id AS rac
FROM jiva.security_audit_info sa
LEFT JOIN jiva.sys_user su ON su.user_idn = sa.user_idn
LEFT JOIN jiva.entity e ON e.entity_idn = su.entity_idn
WHERE sa.event_name IN ('user_login', 'user_logout')
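One possible cause of ORA-01861 here: the checkpoint value DB Connect substitutes for ? is the formatted string produced by to_char, which Oracle cannot implicitly convert back to a DATE. A hedged sketch of the WHERE/ORDER BY portion, comparing against the raw DATE column and converting the checkpoint explicitly (the format mask must match whatever checkpoint value DB Connect actually stores, so verify it against your checkpoint):

```sql
-- same SELECT list and JOINs as in the post, then:
WHERE sa.event_name IN ('user_login', 'user_logout')
  AND sa.event_time > TO_DATE(?, 'MM/DD/YYYY HH24:MI:SS')
ORDER BY sa.event_time ASC
```

Note that rising-column inputs generally expect the result set sorted ascending on the rising column, so the checkpoint ends up at the latest value rather than the earliest.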
Hi Splunkers, I am currently trying to create a dropdown box that will let you select a value and then redirect you to a new dashboard.

<input type="dropdown" token="OperatingSystems" searchWhenChanged="true">
  <label>OS</label>
  <choice value="">Overview</choice>
  <choice value="Windows">Windows</choice>
  <choice value="Linux">Linux</choice>
  <choice value="MacOS">MacOS</choice>
  <change>
    <condition field="Overview">
      <link target="_blank">app/VAA/Overview_aa1</link>
    </condition>
    <condition field="Windows">
      <link target="_blank">app/VAA/Windows_aa1</link>
    </condition>
    <condition field="Linux">
      <link target="_blank">app/VAA/Linux_aa1</link>
    </condition>
    <condition field="MacOS">
      <link target="_blank">app/VAA/MacOS_aa1</link>
    </condition>
  </change>
</input>

When I change the value in the dropdown box, nothing happens. Ideally, when a user changes the OS, it should take them to that dashboard. Any suggestions? Thank you, Marco
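One likely issue: in Simple XML, a <condition> inside an input's <change> block matches on the choice's value or label attribute, not on a field attribute (field is used in drilldowns). A sketch of the same input with value/label-based conditions, keeping the paths from the post:

```xml
<input type="dropdown" token="OperatingSystems" searchWhenChanged="true">
  <label>OS</label>
  <choice value="">Overview</choice>
  <choice value="Windows">Windows</choice>
  <choice value="Linux">Linux</choice>
  <choice value="MacOS">MacOS</choice>
  <change>
    <condition label="Overview">
      <link target="_blank">app/VAA/Overview_aa1</link>
    </condition>
    <condition value="Windows">
      <link target="_blank">app/VAA/Windows_aa1</link>
    </condition>
    <condition value="Linux">
      <link target="_blank">app/VAA/Linux_aa1</link>
    </condition>
    <condition value="MacOS">
      <link target="_blank">app/VAA/MacOS_aa1</link>
    </condition>
  </change>
</input>
```

The "Overview" condition matches on label because its value is the empty string.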
I'm doing some testing and figured out I need to run this in a saved search to extract the JSON field values:

index=dam sourcetype="imperva:dam"
| eval dam_json=_raw
| rex field=dam_json mode=sed "s/^.* \{/{/g"
| eval dam_json=replace(dam_json, "\\\\", "-")
| spath input=dam_json

This removes the header "Dec 9 20:15:27 FQDN" and leaves the JSON between the {}. When I try to use the saved search in a data model, I get this error:

In handler 'datamodeledit': Error in 'Imperva_DB_Audit': Dataset constraints must specify at least one index.

The Splunk version on my laptop is 8.1.0 (build f57c09e87251). On the production system we are running 7.3.6 (build 47d8552a4d84), and an index isn't necessary there, since we have one data model with this as its constraint:

dlp_rule_severity="HIGH"

So, two questions: when did having an index become mandatory, and is it possible to turn off that requirement? If not, we will have to go through our data models before we upgrade. TIA, Joe
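If the index requirement cannot be disabled, one workaround sketch is to put the index (and any field filters) directly into the dataset constraint, using the values already in the post, so the constraint itself satisfies the check:

```
index=dam sourcetype="imperva:dam" dlp_rule_severity="HIGH"
```

This is an assumption about the new validation, not a documented toggle; auditing existing data models for index-less constraints before the upgrade, as you suggest, seems the safe path either way.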
I would like to use the time range picker's advanced mode and create a formula that brings back the last 4 business days. I found some information about full business-day weeks, but I couldn't customize it to my case. Is it possible? Or do I need to handle this another way, outside the time picker?
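The advanced time picker only accepts relative time modifiers, which have no business-day unit, so a common workaround is to search a wide-enough window and filter out weekends in SPL. A sketch (the index name is hypothetical, and it assumes Saturday/Sunday are the non-business days):

```
index=my_index earliest=-6d@d
| eval dow=strftime(_time, "%w")
| where dow!="0" AND dow!="6"
```

Searching back 6 calendar days guarantees at least 4 business days are covered even across a weekend; strftime's %w yields 0 for Sunday and 6 for Saturday.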
Hello everyone, has anyone found a list of all the sourcetypes that Splunk handles? I need to find or make a document of the existing sourcetypes and which data model each one belongs to. Thanks
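There is no single list of every sourcetype Splunk could ever handle, but the sourcetypes actually present in your environment can be enumerated with the metadata command; a sketch:

```
| metadata type=sourcetypes index=*
| table sourcetype totalCount lastTime
```

Mapping sourcetypes to data models is a separate step: with the Common Information Model, the mapping is driven by the tags applied to each sourcetype rather than the sourcetype name itself, so the CIM documentation's per-model tag lists are the usual reference for building that document.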
Hi, I am trying to link two dashboards using the drilldown option. If I drill down on server A, it should link to dashboard X; if I drill down on server B, it should link to dashboard Y. Please suggest how to achieve this.
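Simple XML drilldowns support conditional targets via <condition match="..."> with an eval-style expression. A sketch, in which the field name (server), server values, and dashboard paths are all hypothetical placeholders, and the exact match syntax may vary by Splunk version:

```xml
<drilldown>
  <condition match="'row.server' == &quot;serverA&quot;">
    <link target="_blank">/app/search/dashboard_x</link>
  </condition>
  <condition match="'row.server' == &quot;serverB&quot;">
    <link target="_blank">/app/search/dashboard_y</link>
  </condition>
</drilldown>
```

The 'row.server' token refers to the value of the server column in the clicked row; tokens such as $row.server$ can also be appended to the link as URL parameters to pre-filter the target dashboard.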
Would anyone have an up-to-date way of looking at all indexes and alerting if an index has not received any data in 60 minutes or so? I have seen several ways of looking at this by host, but would prefer to look at it at the index level. Thanks!!
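A common approach is a tstats search over the latest event time per index, alerting when any row exceeds the threshold; a sketch (the 60-minute threshold is just the value from the post):

```
| tstats latest(_time) AS latest_event WHERE index=* BY index
| eval minutes_since=round((now() - latest_event) / 60, 0)
| where minutes_since > 60
```

Saved as an alert that triggers when the number of results is greater than zero, this flags every index that has gone quiet. One caveat: tstats only sees indexes that have ever received data, so an index that has never been written to will not appear at all.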
Hello all, we built a new cluster as we are running out of space on the current one, and we are trying to reroute some of the ingestion to the new cluster by adding the new indexer cluster's stanza to outputs.conf and using the _TCP_ROUTING setting in inputs.conf on the servers whose ingestion we want to reroute. Below is the stanza we added in outputs.conf:

[tcpout:ABC_indexers]
Server = xx.xx.xx.xx.xx:9997, xx.xx.xx.xx.xx:9997, xx.xx.xx.xx:9997
useACK = true

In inputs.conf we added the setting below, pushed it to the servers we want to reroute, and restarted the forwarder service:

_TCP_ROUTING = ABC_indexers

But we are not seeing any ingestion into the new cluster, and we are getting a few errors and warnings. We checked that the forwarders are connected to all our new indexers over port 9997.

WARN TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to output group ABC_indexers has been blocked for 800 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

INFO ProxyConfig - Failed to initialize https_proxy from server.conf for splunkd. Please make sure that the https_proxy property is set as https_proxy=http://host:port in case HTTP proxying needs to be enabled.

We checked everything on the indexers but could not find what is blocking them from receiving the data. We have a cluster master which is ingesting internal logs to these new indexers without any issue. Please let me know if anyone has hit this issue and how you resolved it. Thanks
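One thing worth checking before anything else: .conf attribute names are case-sensitive, and the outputs.conf setting is lowercase server, so a Server = line may be ignored entirely, leaving the output group with no valid targets. A sketch of the expected stanza (IP placeholders kept as in the post):

```
[tcpout:ABC_indexers]
server = xx.xx.xx.xx:9997, xx.xx.xx.xx:9997, xx.xx.xx.xx:9997
useACK = true
```

Running splunk btool outputs list --debug on a forwarder shows which settings actually took effect and from which file.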
I work for a company with a clustered environment of around 9 indexers and 3 search heads, with a cluster master and a deployment server + deployer. Recently, none of our services or entities work together; well, the new ones don't, as all services created prior to 26th Nov still work with their entities. Our entity list does not update with its linked service when an entity is linked, and a duplicate of a working service will not work either. We realised it was an issue with our refresh queue, mainly entity_service jobs. What is the best way forward: to delete the queue? We would like to know what the underlying issue is, but for now all we have to go on is that the refresh queue has been increasing since 26th Nov. Are we advised to refresh or delete the queue? Is that advisable if the root of the problem is still unknown? I have been trying to find what is causing this issue, but to no avail.
Greetings Splunkers, I recently attended Splunk Fundamentals 3, and the instructor mentioned a Splunk feature that automatically converts stats commands to tstats in order to improve performance. The instructor did not go into details but recommended I post the question to the Splunk Community. Is there more information about this, i.e. the version in which it is or will be available, etc.? Will/can we see the conversion happen in the Job Inspector or via noop? How does/will Splunk select which stats commands to convert? Is it only converting based on the metadata and indexed fields in the tsidx files, or will there be other criteria? Thanks in advance for any insight or help with this.
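For context, the manual rewrite such a feature would presumably automate looks like the pair below; both return the same counts when the search involves only indexed fields such as sourcetype (the index name is hypothetical):

```
index=web | stats count BY sourcetype

| tstats count WHERE index=web BY sourcetype
```

The tstats form reads the tsidx files directly instead of retrieving raw events, which is where the performance gain comes from; that is also why such a conversion could only apply when every field referenced is available at index time.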
Hi all, I'm trying to ingest (multiline) events containing the string "public_ip" and drop the rest.

props.conf:

[public_ips]
TRANSFORMS-removeallnonsense = remove_unneeded

transforms.conf:

[remove_unneeded]
REGEX = (?m)^((?!public_ip).)*$
DEST_KEY = queue
FORMAT = nullQueue

When I now try to ingest data using HEC:

curl -k http://localhost:8088/services/collector -H 'Authorization: Splunk 4f40e8ab-99a6-479f-ba13-7352feb11111' \
  -d '{"sourcetype": "public_ips", "event":"foobar"}'

This is not indexed - fine.

curl -k http://localhost:8088/services/collector -H 'Authorization: Splunk 4f40e8ab-99a6-479f-ba13-7352feb11111' \
  -d '{"sourcetype": "public_ips", "event":"foobar public_ip: 1.2.3.4 foobar1"}'

This is indexed - fine.

curl -k http://localhost:8088/services/collector -H 'Authorization: Splunk 4f40e8ab-99a6-479f-ba13-7352feb11111' \
  -d '{"sourcetype": "public_ips", "event":"foobar public_ip: 1.2.3.4 foobar\nline2"}'

This is not indexed - not fine. It seems I have a regex multiline issue I do not see. Thanks for your help in advance, Andreas
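One likely explanation: the transform's REGEX is matched against the whole event, and with (?m) the pattern ^((?!public_ip).)*$ succeeds as soon as any single line lacks "public_ip"; so a multiline event that does contain the string is still routed to the nullQueue because of its other lines. A common alternative is to null everything by default and then requeue events containing the string; a sketch (the transform names are arbitrary, and order in the TRANSFORMS list matters since the last matching transform wins):

```
# props.conf
[public_ips]
TRANSFORMS-route = setnull, keep_public_ip

# transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_public_ip]
REGEX = public_ip
DEST_KEY = queue
FORMAT = indexQueue
```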
We have recently upgraded our non-prod Splunk Enterprise single-instance environment and have noticed a couple of errors when we load the Analytics Workspace. The error says "Maximum call stack size exceeded." I am unsure which features of the Analytics Workspace this could be affecting, and I can't find any errors in splunkd that could explain the cause.
Hi everyone, I have a subnet of IPs. Whenever we see any traffic from those IPs we need an alert, but within the subnet we have a few servers which are authorized for the next week (or until the time mentioned in a lookup). I have a lookup table for that with two fields:

src ====== date
a.b.c.d --- epoch time (11-12-2020)

Now I want, as the end result, an alert whenever any IP from that subnet (the UAT subnet) accesses the internet, or an authorized server does so after the date mentioned in the lookup table. (Please note that the authorized servers are also in that UAT subnet.)
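A sketch of one approach, in which the index, the CIDR range, and the lookup name (uat_authorized, with fields src and date as an epoch) are all hypothetical placeholders for your own values:

```
index=proxy
| where cidrmatch("10.0.0.0/24", src)
| lookup uat_authorized src OUTPUT date AS authorized_until
| where isnull(authorized_until) OR now() > authorized_until
```

Rows survive the final where only if the source is not in the authorization lookup at all, or its authorization window has expired, which matches the alert condition described; saved as an alert, it fires whenever results are returned.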
Hello, I have a Windows index that has data as old as 14,000+ days. From researching, it's because there is data with timestamps too far in the past and too far in the future. Unfortunately this data won't roll due to the dates, and it is filling the index beyond where it should. How can I identify the buckets across the cluster that contain this data? I need to know the buckets so I can decide what to do with them. Thanks!
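The dbinspect command reports each bucket's earliest and latest event times, which makes it possible to spot buckets spanning implausible dates; a sketch (the index name and the five-year cutoff are hypothetical, so adjust both):

```
| dbinspect index=wineventlog
| eval bucket_start=strftime(startEpoch, "%Y-%m-%d"), bucket_end=strftime(endEpoch, "%Y-%m-%d")
| where startEpoch < relative_time(now(), "-5y") OR endEpoch > now()
| table splunk_server path bucketId bucket_start bucket_end
```

Run from a search head, this covers all peers in the cluster, and the path/bucketId columns identify each offending bucket on its indexer.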
Hi, I am installing a fresh copy of Splunk Enterprise version 8.1 on a new Linux server; my existing copy is v6.4. May I know what I have to take note of? (E.g., packages I have to download before installing, and searches and indexes that may not work in the new version.)
Hello, I am trying to create some fields at index time from an XML log. I prepared the sourcetype definition in props.conf with the related TRANSFORMS entry, and in transforms.conf I have the following:

[xmlkv_extract]
REGEX = \<(.*?)\>(.*?)\<
FORMAT = $1::$2
WRITE_META = true

[xmlkv_extract_new]
REGEX = <email>(.*?)<\/email><ccard>(.*?)<\/ccard><company>(.*?)<\/company><city>(.*?)<\/city>
FORMAT = email::"$1" credit_card::"$2" company::"$3" city::"$4"
WRITE_META = True

And this is my sample event:

<email>orci.Phasellus.dapibus@egestasSed.ca</email><ccard>4539599637112700</ccard><city>Hamilton</city><company>Eros Proin LLC</company></fst>

Now, the problem: if I use the first transform, only the email field is extracted (by the way, I tried the regex on the regex101 site and it matched all the fields). If I use the second transform, everything works. Is there some limitation in index-time field extraction for "generic" XML tag extraction? Thanks, Fausto
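One thing to check: an index-time transform's REGEX is applied only once unless REPEAT_MATCH is enabled in transforms.conf, which would explain why only the first tag/value pair is extracted while the explicit per-field regex works. A sketch of the generic transform with that setting (the regex is also tightened slightly so each capture cannot cross a tag boundary):

```
[xmlkv_extract]
REGEX = <([^<>/]+)>([^<]*)<
FORMAT = $1::$2
WRITE_META = true
REPEAT_MATCH = true
```

This is a sketch rather than a guaranteed fix; it assumes the tags in each event are flat key/value pairs like the sample, since nested elements would need a different pattern.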
In the table below, I want to search by the field "Core Content", where "Core Content" should take the top 2 highest values.

Core Content   Count   Status   Flag
4268           2223    N        Red
4267           1794    N        Yellow
4266           305     Y        Yellow
4265           90      Y        Red
4268           19      Y        Green
4263           63      N        Green
4262           133     Y        Red
4261           34      N        Red
4260           26      N        Yellow
4768

The output I expect is:

Core Content   Count   Status   Flag
4268           2223    N        Red
4267           1794    N        Yellow
4268           19      Y        Green

All other rows I have to treat as outdated.
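A sketch of one way to keep only the rows belonging to the two highest distinct "Core Content" values: sort descending and use streamstats to rank the distinct values as they appear (the rename just avoids quoting a field name with a space inside the stats functions):

```
<your base search>
| rename "Core Content" AS core_content
| sort 0 - core_content
| streamstats dc(core_content) AS cc_rank
| where cc_rank <= 2
| rename core_content AS "Core Content"
```

Because the events are sorted descending, the running distinct count cc_rank is 1 for all rows of the highest value and 2 for all rows of the second highest, so both 4268 rows survive along with the 4267 row.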
Hi, is it possible to download v6 of Splunk Enterprise for Linux?