All Posts

An example is attached. The first line is what I want, but I get a whole bunch of _time lines, and I only want the summed-up line (previous month total) shown above. I need to get these numbers for the previous month for our pricing application. I actually got most of this from the Chargeback application, but have been fiddling with it to get what I need out of it.
We are in the process of a full hardware upgrade of all our indexers in our distributed environment. We have three standalone search heads connected to a cluster of many indexers. We are proceeding one indexer at a time:
1. Load up a new indexer.
2. Integrate it into the cluster.
3. Take an old indexer offline, enforcing counts.
When the decommissioning process finishes and the old indexer has gracefully shut down, an alert appears on our search heads in the Splunk Health Report: "The search head lost connection to the following peers: <decommissioned peer>. If there are unstable peers, confirm that the timeout (connectionTimeout and authTokenConnectionTimeout) settings in distsearch.conf are at appropriate values." I cannot figure out why we are seeing this alert; my conclusion is that we must be missing a step somewhere. To decommission a server, we do the following:
1. On the indexer: splunk offline enforce-counts
2. On the cluster master: splunk remove cluster-peers <GUID>
3. On the indexer: completely uninstall Splunk.
4. On the cluster master: rebalance indexes.
We have also tried reloading the health.conf configuration by running '| rest /services/configs/conf-health.conf/_reload' on the search heads, to no effect. We cannot figure out where the health report is retaining this old data; the _internal logs clearly show that the moment of the GracefulShutdown transition on the cluster master is when the PeriodicHealthReporter component on the search heads begins to alert. The indexers in question are no longer listed as search peers on the search heads, nor on the cluster master, and the monitoring console looks fine. What could we be missing?
Hello all, we want to enrich events as they become notables in ES, before they are sent on to Mission Control. The idea is to enrich each event via some sort of search (all the data will be in Splunk already) to add DNS, DHCP, threat intel, and some endpoint data. Is it possible to have a search run against the notable index to gather information from other indexes and add it to the notable event? If so, I would love to discuss.
You cannot do a conditional lookup, but you could do the lookup across all the data and then only conditionally display the data that was looked up.
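For example, a minimal sketch of that pattern in SPL, assuming a lookup named accounts keyed on department and a hypothetical output field department_name (the lookup and field names come from the pseudo-code in the question further down this page):

... | lookup accounts department AS field2 OUTPUT department_name
    | eval department_name=if(field1!=field2, department_name, null())

The lookup runs over every event, but the eval keeps its output only where field1 and field2 differ.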
Please post mocked-up examples of the data you have in Splunk and what you would like the report to look like. We don't know anything about your data, so it's hard to know what you want based only on an SPL query.
Dear Support, I have downloaded the Splunk Add-on for Sysmon. I am also using Sysmon App for Splunk, which requires the former. My Sysmon data are stored in an index named os_sysmon. Some dashboards of Sysmon App for Splunk show up empty because they rely on a field named EventDescription. I checked the deployment of the Splunk Add-on for Sysmon, and under the lookups folder I found a file named microsoft_sysmon_eventcode.csv, just as the doc (Lookups for the Splunk Add-on for Sysmon) says. The file is populated with 28 entries and has two fields: EventCode and EventDescription. When I search my index (index=os_sysmon) I do get the field EventCode, but not EventDescription. (The same goes for the lookup file microsoft_sysmon_record_type.csv: I do have record_type but not record_type_name.) Now, Sysmon App for Splunk has only one macro, named sysmon, with an original sourcetype=....., which I changed to index=sysmon. It makes no attempt to derive an EventDescription field from EventCode via the lookup file. It seems strange that the developers of Sysmon App for Splunk forgot to derive the EventDescription field from EventCode (via the lookup) in their only macro. Should I do it myself there, or is it something to fix in the Splunk Add-on for Sysmon, and how? Best regards, Altin
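If you do want to derive it yourself, a minimal sketch, assuming the add-on exposes the CSV through a lookup definition named microsoft_sysmon_eventcode (the definition name is an assumption; check the add-on's transforms.conf for the real one):

index=os_sysmon
| lookup microsoft_sysmon_eventcode EventCode OUTPUT EventDescription

The same pattern would apply to microsoft_sysmon_record_type.csv for record_type_name.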
That is not normal, but not unheard of.  It depends on how busy the staff is.  Your Splunk account team should be able to find out why the vetting process is taking so long.
I suspect you are right, but you probably should post a separate question about that.
What is your question?  What have you tried so far and how did those efforts not meet expectations? Have you looked at the JSON functions in the Search Reference Manual?
I have found a search in the Chargeback application that might fit for seeing the SVCs by index. That is how my company manages costs: by index. The search is good, but I'm still having issues getting just the SVCs and index as my return. I did modify it from one day to one month, but I want it to bring back only one month and thus have only one line of results. Any help would be appreciated.

index=summary source="splunk-ingestion"
| `sim_filter_stack(myimplementation)`
| dedup keepempty=t _time idx st
| stats sum(ingestion_gb) as ingestion_gb by _time idx
| eventstats sum(ingestion_gb) as total_gb by _time
| eval pct=ingestion_gb/total_gb
| bin _time span=1m
| join _time
    [ search index=summary source="splunk-svc-consumer" svc_consumer="data services" svc_usage=*
      | fillnull value="" svc_consumer process_type search_provenances search_type search_app search_label search_user unified_sid search_modes labels search_head_names usage_source
      | eval unified_sid=if(unified_sid="",usage_source,unified_sid)
      | stats max(svc_usage) as utilized_svc by _time svc_consumer search_type search_app search_label search_user search_head_names unified_sid process_type
      | timechart span=1m sum(utilized_svc) as svc_usage ]
| eval svc_usage=svc_usage*pct
| timechart useother=false span=1m sum(svc_usage) by idx limit=200
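A minimal sketch of one way to get a single row per index, assuming the per-minute bin and join are still needed for the SVC attribution and only the final presentation should change: with the search time range set to the previous month, replace the trailing timechart with a plain stats, e.g.

| eval svc_usage=svc_usage*pct
| stats sum(svc_usage) as monthly_svc_usage by idx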
Hi, what's the best way to do a lookup based only on the results of the main search? I want to run the lookup only when two fields don't match. Pseudo-code would be: IF field1!=field2 THEN | lookup accounts department as field2 OUTPUT So, like the if-then statement most programming languages allow. Thanks, Lee
"where sha256a is from | eval sha256a=sha256({ "key1": "val1", "key2":"val2"})" What are you saying in the above statement? Do you want the events to be sha256 encoded? That's not what you put in yo... See more...
"where sha256a is from | eval sha256a=sha256({ "key1": "val1", "key2":"val2"})" What are you saying in the above statement? Do you want the events to be sha256 encoded? That's not what you put in you example so that part is a bit confusing. The first event in your combined json starts with sha256a and the second sha256b. Should the next be sha256c? Please post example events and an example of what you would like them transformed into.
@VatsalJagani thanks for your message. I just checked the automatic lookups and I don't see one created for test.csv. Am I missing something? Do I need to check somewhere else? Please help me.
Hi, a quick summary of our deployment:
- Splunk standalone 9.0.6
- Palo Alto Add-on and App freshly installed, 8.1.0
- SC4S v3.4.4 sending logs to Splunk
- PA logs ingested into indexes and sourcetypes according to the SC4S official doc: https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/PaloaltoNetworks/panos/
- I see events in all indexes and with all sourcetypes. Indexes: netfw, netproxy, netauth, netops. Sourcetypes: pan:traffic, pan:threat, pan:userid, pan:system, pan:globalprotect, pan:config
What else do I need to do to make the official Palo Alto App work? I checked the documentation https://pan.dev/splunk/docs/installation/ and I enabled the data acceleration, and still no data is shown in any dashboard. I don't know what else is missing; any suggestions? Thanks a lot
Thanks @gcusello !
Hi @jwalrath1, I don't think this is a question for the Community: it requires a Splunk Professional Services specialist. Anyway, if you can define a hostname and/or an IP to use for the configurations, it should run, but I suggest asking Splunk PS. Ciao. Giuseppe
Thanks @richgalloway. I have a second part to this question. Can I use the manager node to do a deployment that replicates configurations (dashboards and reports) saved on site A to site B? Could this be done with the SHC deployer if I were to do a deployment on a weekly basis, for example?
Hi @richgalloway, thank you for the inputs. I want to go with modifying my alert query using a lookup file. That is, I want to add the holiday dates to an Excel sheet and upload it to Splunk, but I am not understanding how to frame a query with that. Below is my current query:

index=error-logs status=401
| stats count

Can you please help?
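A minimal sketch of that pattern, assuming the holiday dates are uploaded as a lookup definition named holidays with a single holiday_date column in YYYY-MM-DD format (both the lookup name and the field name are assumptions):

index=error-logs status=401
| stats count
| eval today=strftime(now(), "%Y-%m-%d")
| lookup holidays holiday_date AS today OUTPUT holiday_date AS matched_holiday
| where isnull(matched_holiday)

On dates listed in the lookup, the where clause drops the result row, so the alert does not fire.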
If I have a multisite architecture with site A and site B, can they live on different cloud environments and still have index replication? For example, if I have site A components on Azure but site B is on AWS, can I still utilize index clustering across the two sites for replication?
My query returns many events; each event is a JSON object, i.e. { "key1": "val1", "key2": "val2" }. I would like to convert all the events into one event that contains all the original events, using the SHA-256 of each original event as its key, so the new JSON will look like:
{
  sha256a: { "key1": "val1", "key2": "val2" },
  sha256b: { "key1": "val1a", "key2": "val2a" }
}
where sha256a is from | eval sha256a=sha256({ "key1": "val1", "key2":"val2"})
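A minimal sketch of one way to do this in SPL, assuming each event's _raw is exactly the JSON object to be nested (sha256(), mvjoin(), and stats list() are standard SPL functions; note that list() caps at 100 values):

<your search>
| eval pair="\"" . sha256(_raw) . "\": " . _raw
| stats list(pair) as pairs
| eval combined="{" . mvjoin(pairs, ", ") . "}"
| fields combined

Each event contributes one "hash": {original json} pair, and the final eval stitches the pairs into a single JSON object.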