Hello,
I was wondering whether it is possible to locate or search in Splunk for where a specific lookup table is being used: in a dashboard, alert, saved search, report, etc. Thank you for your help!
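(A hedged starting point: saved searches and dashboard XML are both exposed over REST, so you can grep them for the lookup's name. The lookup name my_lookup below is a placeholder.)
| rest /servicesNS/-/-/saved/searches
| search search="*my_lookup*"
| table title eai:acl.app search
and for dashboards:
| rest /servicesNS/-/-/data/ui/views
| search eai:data="*my_lookup*"
| table title eai:acl.app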
I have a dashboard with three dropdown inputs. The first is Date Range, and it has a default value of last 24 hours. The dashboard runs the initial search fine, but when I change the date range via the presets in the dropdown, nothing updates.
Code for the dropdown:
{
    "type": "input.timerange",
    "options": {
        "token": "dateRange",
        "defaultValue": "-24h@h,now"
    },
    "title": "Date Range"
}
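(One hedged thing to check, since the data source definition isn't shown: in Dashboard Studio a search only reacts to a time range input if the data source binds the token in its queryParameters. The query below is a placeholder.)
{
    "type": "ds.search",
    "options": {
        "query": "index=_internal | timechart count",
        "queryParameters": {
            "earliest": "$dateRange.earliest$",
            "latest": "$dateRange.latest$"
        }
    }
}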
Hello. In monitoring our application's VCT and EURT, we noticed that for all of Q3 the VCT was taking longer than the EURT. Then all of a sudden it switched, and now VCT is less than EURT. It seems to me that VCT should almost always be shorter than EURT. Is this true? Does this sound like a configuration issue that was corrected? If so, should I consider the EURT as the VCT for Q3?
We are in the process of a full hardware upgrade of all our indexers in our distributed environment. We have three standalone search heads connected to a cluster of many indexers. We are proceeding one indexer at a time:
1. Load up a new indexer
2. Integrate it into the cluster
3. Take an old indexer offline, enforcing counts
When the decommissioning process finishes and the old indexers are gracefully shut down, an alert appears on our search heads in the Splunk Health Report: "The search head lost connection to the following peers: <decommissioned peer>. If there are unstable peers, confirm that the timeout (connectionTimeout and authTokenConnectionTimeout) settings in distsearch.conf are at appropriate values." I cannot figure out why we are seeing this alert. My conclusion is that we must be missing a step somewhere. To decommission a server, we do the following:
1. On the indexer: splunk offline --enforce-counts
2. On the cluster master: splunk remove cluster-peers <GUID>
3. On the indexer: completely uninstall Splunk
4. On the cluster master: rebalance indexes
We have also tried reloading the health.conf configuration by running '| rest /services/configs/conf-health.conf/_reload' on the search heads, to no effect. We cannot figure out where the health report is retaining this old data, and the _internal logs clearly show that the moment of the GracefulShutdown transition on the cluster master is when the PeriodicHealthReporter component on the search heads begins to alert. The indexers in question are no longer listed as search peers on the search heads, and they are not listed as search peers on the cluster master either. The monitoring console looks fine. What could we be missing?
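(A hedged way to see where the stale peer is being retained: dump the full health report tree on an affected search head and look for the decommissioned peer's name in the feature details.)
| rest /services/server/health/splunkd/details splunk_server=local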
Hello all,
We want to enrich events as they become notables in ES, before they are sent on to Mission Control. The idea is to enrich the event via some sort of search (all the data will be in Splunk already) to add DNS, DHCP, threat intel, and some endpoint data.
Is it possible to have a search run against the notable index to gather information from other indexes and add it to the notable event? If so, I would love to discuss.
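(A hedged sketch of the general pattern; the lookup and field names below are hypothetical, not from ES itself. A scheduled search over the notable index can pull in context with lookups, e.g.:)
index=notable
| lookup dns_assets ip AS src OUTPUT hostname AS src_dns_name
| lookup threat_intel_iocs ioc AS src OUTPUT threat_description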
Dear Support,
I have downloaded the Splunk Add-on for Sysmon. I am also using the Sysmon App for Splunk, which requires the former. My Sysmon data are stored in an index named os_sysmon.
Some dashboards of the Sysmon App for Splunk show empty, because they rely on a field named EventDescription. I checked the deployment of the Splunk Add-on for Sysmon and, under its lookups folder, found a file named microsoft_sysmon_eventcode.csv, just as the doc "Lookups for the Splunk Add-on for Sysmon" says. The file is populated with 28 entries and has two fields: EventCode and EventDescription.
When I search my index (index=os_sysmon) I do get the field EventCode, but not EventDescription. (The same goes for the lookup file microsoft_sysmon_record_type.csv: I have record_type but not record_type_name.)
Now, the Sysmon App for Splunk has only one macro, named sysmon, with an original sourcetype=....., which I changed to index=sysmon. There is no attempt to derive an EventDescription field from EventCode via the lookup file. It seems strange that the developers of the Sysmon App for Splunk forgot to derive (eval) the EventDescription field they use from EventCode (via the lookup) in their only macro. Should I do it myself there, or is it something to fix in the Splunk Add-on for Sysmon, and how?
Best regards,
Altin
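(A hedged sketch of the enrichment the macro could carry, assuming the add-on exposes a lookup definition named after microsoft_sysmon_eventcode.csv; check the actual definition name in the add-on's transforms.conf:)
index=os_sysmon
| lookup microsoft_sysmon_eventcode EventCode OUTPUT EventDescription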
I have found a search in the chargeback application that might fit for seeing the SVCs by index. Unfortunately, that's how my company manages costs: by index. The search is good, but I'm still having issues getting just the SVCs and index as my return. I modified it from one day to one month, but I want it to bring back the whole month at once and thus have only one line of results. Any help would be appreciated.
index=summary source="splunk-ingestion"
| `sim_filter_stack(myimplementation)`
| dedup keepempty=t _time idx st
| stats sum(ingestion_gb) as ingestion_gb by _time idx
| eventstats sum(ingestion_gb) as total_gb by _time
| eval pct=ingestion_gb/total_gb
| bin _time span=1m
| join _time
[ search index=summary source="splunk-svc-consumer" svc_consumer="data services" svc_usage=*
| fillnull value="" svc_consumer process_type search_provenances search_type search_app search_label search_user unified_sid search_modes labels search_head_names usage_source
| eval unified_sid=if(unified_sid="",usage_source,unified_sid)
| stats max(svc_usage) as utilized_svc by _time svc_consumer search_type search_app search_label search_user search_head_names unified_sid process_type
| timechart span=1m sum(utilized_svc) as svc_usage ]
| eval svc_usage=svc_usage*pct
| timechart useother=false span=1m sum(svc_usage) by idx limit=200
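(A hedged variant for a single line of results per index: keep the per-minute join that apportions SVC across indexes, but collapse the final presentation with stats instead of timechart, i.e. replace the last line with:)
| stats sum(svc_usage) AS svc_usage by idx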
Hi,
What's the best way to do a lookup based only on the results of the main search? I want to run it only when two fields don't match. Pseudocode would be:
If field1!=field2 THEN | lookup accounts department as field2 OUTPUT
So, like the if/then statements most programming languages allow.
Thanks
Lee
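(SPL has no conditional pipeline branch, but the usual pattern is to run the lookup unconditionally into a scratch field and apply it conditionally with eval. A hedged sketch; the output field names here are hypothetical:)
| lookup accounts department AS field2 OUTPUT department AS dept_from_lookup
| eval department=if(field1!=field2, dept_from_lookup, null())
| fields - dept_from_lookup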
Hi, a quick summary of our deployment:
- Splunk standalone 9.0.6
- Palo Alto Add-on and App freshly installed, 8.1.0
- SC4S v3.4.4 sending logs to Splunk
- PA logs ingested into indexes and sourcetypes according to the official SC4S doc: https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/PaloaltoNetworks/panos/
- I see events in all indexes and with all sourcetypes. Indexes: netfw, netproxy, netauth, netops. Sourcetypes: pan:traffic, pan:threat, pan:userid, pan:system, pan:globalprotect, pan:config
What else do I need to do to make the official Palo Alto App work? I checked the documentation at https://pan.dev/splunk/docs/installation/ and enabled the data model acceleration, and still no data is shown in any dashboard. I don't know what else is missing; any suggestions? Thanks a lot
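(A hedged sanity check, using the index and sourcetype names from the post, to confirm the events are searchable where the app expects them; if this returns rows but the dashboards stay empty, the app's macros and data models likely don't cover these indexes:)
| tstats count where index IN (netfw, netproxy, netauth, netops) by index, sourcetype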
If I have a multisite architecture with site A and site B, can they live on different cloud environments and still have index replication? For example, if I have site A components on Azure but site B is on AWS, can I still utilize index clustering across the two sites for replication?
My query returns many events; each event is in the form of a JSON object, i.e. { "key1": "val1", "key2": "val2" }. I would like to convert all events into one event that contains all the original events, using the sha256 of each original event as its key, so the new JSON will look like:
{
  sha256a: { "key1": "val1", "key2": "val2" },
  sha256b: { "key1": "val1a", "key2": "val2a" }
}
where sha256a is from | eval sha256a=sha256({ "key1": "val1", "key2":"val2"})
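(A hedged sketch of one way to do this in SPL, assuming the JSON text is in _raw; sha256() and mvjoin() are standard eval functions. Note that stats list() truncates at 100 values, so this only works for result sets under that limit:)
| eval hash=sha256(_raw)
| eval pair="\"" . hash . "\": " . _raw
| stats list(pair) AS pairs
| eval combined_json="{" . mvjoin(pairs, ", ") . "}"
| fields combined_json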
Hello everyone, I am trying to enable some basic detections that I found in the Splunk Security Essentials app. We do have ES; however, we are still in the process of getting all of our data CIM compliant. Do alerts from the Splunk Security Essentials app need to be mapped to ES using the "add mapping" option? Or do these basic alerts have an equivalent in the ES content management use cases tab?
Hi, while using Splunk SOAR we have several apps for several integrations with Azure/Graph. Examples of such apps are Microsoft 365 Defender, MS Graph for SharePoint, etc. However, most of these apps have limited functionality (i.e. they do not have an action for every possible API). Hence, in order to use other APIs (not available through the standard apps) we thought to configure the HTTP app against Graph (where we already have an app registration and several permissions, done via Azure). However, when we configure the client_id and the secret_id along with the other parameters, we receive an error from the app (the error screenshot and the asset configuration screenshot are not included here). Does anyone know what's wrong with my configuration? Did anyone get it to work? Thank you in advance!
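(A hedged way to isolate whether the credentials themselves work, outside SOAR, using the standard OAuth2 client-credentials flow for Graph; tenant_id, client_id, and client_secret are placeholders:)
curl -X POST "https://login.microsoftonline.com/<tenant_id>/oauth2/v2.0/token" -d "client_id=<client_id>" -d "scope=https://graph.microsoft.com/.default" -d "client_secret=<client_secret>" -d "grant_type=client_credentials"
If that returns a token, the problem is in the asset configuration rather than the app registration.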
Hi,
I have submitted my app to the Splunk Cloud Platform vetting process and it has been in "Pending" status for more than 3 weeks. Is this timespan normal? Is there any way to contact the team and check for a more specific status? Thanks
I have data like the below (Field B, Field C, and Field D appear to be multivalue):

field A | Field B | Field C | Field D
abc.com | 1 1 | AB CD | 1 1
xyz.com | 2 2 | AB CD | 1 1
abc.com | 1 1 | AB CD | 1 1
xyz.com | 2 2 | AB CD | 1 1
def.com | 1 | AB CD | 0

I want to group the Field A values such that all abc.com rows come out as one row with an associated count. I want output like:

field A | count | Field B | Field C | Field D
abc.com | 2 | 1 1 | AB CD | 1 1
xyz.com | 2 | 2 2 | AB CD | 1 1
def.com | 1 | 1 | AB CD | 0

If I take the path of stats count, it splits Field C and Field D, which I don't want; I want them to be compared as a single group value. Looking for suggestions. Thanks in advance.
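(A hedged sketch, assuming Field B, Field C, and Field D are multivalue: serialize each multivalue field into a single join key before stats so each combination is compared as one group value:)
| eval B_key=mvjoin('Field B', " "), C_key=mvjoin('Field C', " "), D_key=mvjoin('Field D', " ")
| stats count by "field A" B_key C_key D_key
| rename B_key AS "Field B", C_key AS "Field C", D_key AS "Field D"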
Hi, I am sending logs to another product, without indexing them in Splunk, by using the _SYSLOG_ROUTING DEST_KEY in the transforms.conf file. Looking at the documentation "How Splunk licensing works", it says: "When ingesting event data, the measured data volume is based on the raw data that is placed into the indexing pipeline." Looking at the monitoring console, I realized that the indexing pipeline includes the syslog out, tcp out, and indexer lines, so it seems that by using the syslog routing DEST_KEY I could still consume Splunk license. Can you confirm this? Kind Regards, Angelo
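(For reference, a hedged sketch of the usual routing pattern; the stanza, sourcetype, and group names are hypothetical. The second transform keeps the events out of the index by sending them to the nullQueue:)
# transforms.conf
[route_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = my_syslog_group

[drop_from_index]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# props.conf
[my_sourcetype]
TRANSFORMS-route = route_to_syslog, drop_from_index

# outputs.conf
[syslog:my_syslog_group]
server = <receiver_host>:514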
Hey All,
I've configured tcp-ssl on a HF and created certificates and the following configuration. The HF receives syslog from a third party; I'll send the third-party company the CA (combined certificate) I created based on these docs:
1. How to create and sign your own TLS certificates
2. Create a single combined certificate file

inputs.conf:
[tcp-ssl://2222]
index = test
sourcetype = st_test

[SSL]
serverCert = C:\Program Files\Splunk\etc\auth\mycerts\myServerCertificate.pem
sslPassword = <Server.key password>
sslRootCAPath = C:\Program Files\Splunk\etc\auth\mycerts\myCertAuthCertificate.pem

server.conf:
[sslConfig]
sslPassword = <encrypted password that I didn't configure>

And yet Splunk isn't listening on the requested port, e.g. 2222. What am I missing? The error I get in Splunk's _internal is: "SSL context not found. Will not open raw (SSL) IPv4 port 2222". Please assist, and thank YOU!!!
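(Two hedged checks, using the paths from the post: "SSL context not found" often means the sslPassword doesn't decrypt the private key, or the serverCert PEM doesn't contain the key. Verify the chain and dump what Splunk actually resolved:)
openssl verify -CAfile "C:\Program Files\Splunk\etc\auth\mycerts\myCertAuthCertificate.pem" "C:\Program Files\Splunk\etc\auth\mycerts\myServerCertificate.pem"
splunk btool inputs list --debug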
The app "Splunk App for Fraud Analytics" introduced that we "can download and install test data from here. Please consider that using test data can use up to 7 GB and will take 10-30 minutes for the ...
See more...
The app "Splunk App for Fraud Analytics" introduced that we "can download and install test data from here. Please consider that using test data can use up to 7 GB and will take 10-30 minutes for the test data to initialize correctly". But I did not find any test data attached.