All Posts



Hi Splunkers, I'm trying to send alert data from one index to another using a macro. For example, the macro has four arguments, as below, and I would like to send the data to a new index called "newidx" using the collect command. Here is the macro, called `newmacro`:

eval apple=xyz, banana=abc, mango=www, grape=123 | collect index=newidx

The idea is that wherever I reference this macro in an alert, that alert's raw data should be copied to newidx. However, the sourcetype always changes to "stash" instead of the original, and I don't see all the original fields in the summary index. Is there a way to define a sourcetype, something like:

| collect index=newidx sourcetype=$sourcetype$
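The collect command does accept a sourcetype option, but as far as I know it only takes a literal string, not a per-event field like $sourcetype$, and writing a sourcetype other than "stash" causes the collected events to count against your license. A minimal sketch, where the sourcetype name is a placeholder to adapt:

```spl
eval apple=xyz, banana=abc, mango=www, grape=123
| collect index=newidx sourcetype=my_original_sourcetype
```

Note also that collect writes only what ends up in _raw; search-time extracted fields that are not part of the raw event will not reappear in the summary index unless you add them to the output (e.g. via table/fields before collect, or output_format=hec on versions that support it).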
Dear All, I am unable to send data from a universal forwarder to Splunk Enterprise. I have minimal knowledge of Splunk; I'm trying to configure the universal forwarder but without success. Could you please help me with this? Please find my configuration below. I am using Splunk Enterprise 9.0, universal forwarder version 9.1.1, and CentOS 7.0.

inputs.conf:

[monitor:///var/log/messages]
index=os
disabled=0

outputs.conf:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
[server://192.168.122.1:9997]

I used the following command to check the port status:

netstat -an | grep 9997
tcp     0     0   0.0.0.0:9997   0.0.0.0:*   LISTEN

localhost.localdomain is my Splunk Enterprise instance and 127.0.0.1 is my Splunk universal forwarder. I want to know where my mistake is. I would appreciate your kind support; thanks in advance.
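The `[server://192.168.122.1:9997]` line in the outputs.conf above is not valid syntax: the indexer address belongs in a `server` attribute inside the target-group stanza, not in a stanza of its own. A corrected sketch, using the IP and port from the post:

```ini
# outputs.conf on the universal forwarder
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = 192.168.122.1:9997
```

After fixing this, restart the forwarder and confirm the indexer actually has a receiving port configured on 9997 (Settings > Forwarding and receiving > Receive data); the netstat output only shows that something is listening, not that Splunk is the listener.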
Hello, I have a lookup in which all the hostnames are available in a field called "title", along with their teams. I would like to set up an alert for team "abc" if any of those hosts stops reporting for more than 15 minutes. I tried the search below but am unable to get results. Can anyone please help me with the search? It would be very helpful.

The search I am using:

| inputlookup 123.csv
| search team="abc"
| table title
| rename title as host
| appendpipe [ | stats count as islookupcount ]
| eval current_time = now()
| eval islookupcount = coalesce(islookupcount, 0)
| search islookupcount = 0
| eventstats latest(_time) as last_event_time by host
| where current_time - last_event_time > 900
| eval stopped_sending_time=strftime(current_time,"%Y-%m-%d %H:%M:%S")
| table unit_id, host, stopped_sending_time

Please help me with a better search for my use case; maybe I am not using the right one. Thanks
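One problem with the search above is that inputlookup results carry no _time, so `eventstats latest(_time)` has nothing to aggregate; the last-seen times have to come from the event data itself, typically via tstats. A hedged sketch (the `index=*` scope and the 900-second threshold are assumptions to adapt):

```spl
| inputlookup 123.csv where team="abc"
| rename title as host
| table host
| join type=left host
    [| tstats latest(_time) as last_event_time where index=* by host]
| where isnull(last_event_time) OR now() - last_event_time > 900
| eval stopped_sending_time = strftime(now(), "%Y-%m-%d %H:%M:%S")
| table host, stopped_sending_time
```

join has subsearch limits in large environments; an append-then-`stats max(last_event_time) by host` pattern avoids them if the host list is big.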
Embed in the dashboard
I am trying to create a dashboard panel whose dropdowns differ depending on the row you select. I am using one of the searches that comes with the monitoring application as my search:

index=_internal sourcetype=splunkd TERM(group=tcpin_connections) TERM("cooked") OR TERM("cookedSSL") (hostname!=*.splunk*.*)
| dedup hostname
| stats c as fwdCount by version
| rex field=version "^(?<fwdV>\d+.\d+)"
| eval splV= [ | makeresults | eval VERSION=7.0 | append [ | rest splunk_server=local count=1 /services/server/info | stats max(version) as VERSION] | rex field=VERSION "^(?<version>\d+.\d+)" | stats max(version) as splV | return $$splV ]
| eval fwd_7_3_eos=relative_time(strptime("22-Oct-2021", "%d-%b-%Y"), "+1d@d"),
    fwd_8_0_eos=relative_time(strptime("22-Oct-2021", "%d-%b-%Y"), "+1d@d"),
    fwd_8_1_eos=relative_time(strptime("19-Apr-2023", "%d-%b-%Y"), "+1d@d"),
    fwd_8_2_eos=relative_time(strptime("30-Sep-2023", "%d-%b-%Y"), "+1d@d"),
    fwd_9_0_eos=relative_time(strptime("14-Jun-2024", "%d-%b-%Y"), "+1d@d"),
    fwd_9_1_eos=relative_time(strptime("28-Jun-2025", "%d-%b-%Y"), "+1d@d"),
    fwd_default_eos=relative_time(strptime("01-Jan-1971", "%d-%b-%Y"), "+1d@d")
| eval expTimestamp = case(
    match($$fwd_version$$, "^7\.3"), fwd_7_3_eos,
    match($$fwd_version$$, "^8\.0"), fwd_8_0_eos,
    match($$fwd_version$$, "^8\.1"), fwd_8_1_eos,
    match($$fwd_version$$, "^8\.2"), fwd_8_2_eos,
    match($$fwd_version$$, "^9\.0"), fwd_9_0_eos,
    match($$fwd_version$$, "^9\.1"), fwd_9_1_eos,
    1==1, fwd_default_eos)
| fields - fwd_*_eos
| eval warn=case( (now() > expTimestamp), fwdCount, 1==1, 0)
| eval info=fwdCount-warn
| rename warn as "Out of date", info as "Up to date"
| fields - fwdV, splV, fwdCount, expTimestamp

What I want is for the dropdown to change based on the row I select (see attached snapshot).
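One hedged way to do this in Simple XML is a table drilldown that sets a token from the clicked row, which a dropdown input then consumes. The token name, field names, and the query inside the input are placeholders, not taken from the post:

```xml
<table>
  <search><query>...the forwarder-version search above...</query></search>
  <drilldown>
    <!-- capture the clicked row's version field into a token -->
    <set token="selected_version">$row.version$</set>
  </drilldown>
</table>
<input type="dropdown" token="fwd_host" depends="$selected_version$">
  <label>Forwarders on $selected_version$</label>
  <search>
    <query>index=_internal sourcetype=splunkd group=tcpin_connections version="$selected_version$*"
      | stats count by hostname</query>
  </search>
  <fieldForLabel>hostname</fieldForLabel>
  <fieldForValue>hostname</fieldForValue>
</input>
```

The `depends` attribute hides the dropdown until a row has been clicked, so its contents are always specific to the selected row.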
@PickleRick I have lots of microservices that work together. When a user searches on my product, the logs show the flow of modules that processed the user's request, e.g.: front-end > search > db > report
I haven't been able to find this, but I basically want to calculate an uptime percentage for a host based on two unique events. One gets logged when something is bad, the other when everything is fine.

An example would be a host logging 10 minutes of "ok" events, then 4 minutes of "bad" events, then 18 minutes of "ok" events, etc. I need to output the following based on the search range of the query:

Host | total_ok_duration | total_bad_duration | percentage_ok_duration

This needs to run and return results for multiple hosts as well.
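A hedged sketch of one common approach: treat each event as the start of a state that lasts until the next event for the same host, compute that duration with streamstats, then sum per state. The index, sourcetype, and the strings used to classify events are placeholders to adapt:

```spl
index=my_index sourcetype=my_health_events
| eval state = if(searchmatch("bad"), "bad", "ok")
| sort 0 host -_time
| streamstats current=f window=1 last(_time) as next_time by host
| eval duration = coalesce(next_time, now()) - _time
| stats sum(eval(if(state="ok", duration, 0)))  as total_ok_duration
        sum(eval(if(state="bad", duration, 0))) as total_bad_duration
        by host
| eval percentage_ok_duration = round(100 * total_ok_duration / (total_ok_duration + total_bad_duration), 2)
```

Sorting newest-first means `streamstats last(_time)` from the previous row yields each event's successor timestamp; the newest event per host has no successor, so its state is assumed to run until now(). You may want to cap the open interval at the search range's end instead.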
That's a tough problem. It's not a Splunk-tough problem but a generally tough problem: in order to find the matches, you need to do the comparisons, and that's the biggest issue here. Since you don't have a fixed field to look up, but instead want to use the lookup as a list of patterns to match against your whole raw event (at least that's how I interpret your requirement), you have to do m*n "searches" against your data, where m is the number of events and n is the number of distinct values in your lookup.

If you know you can split the events into separate words, that might make it a bit easier, because you don't have to match your raw event against terms from the lookup but can instead do a lookup with the words from the event (which could be marginally faster, since it's more probable that you'll match something before reaching the end of the lookup).

There are several possible approaches here, but I'm not sure which one would be fastest given the size of your data. The more events you have to match, the more tempting it becomes to build something that matches cleverly over a sorted list of the terms from your lookup. (To make things a bit more complicated, remember that each "comparison" is not an atomic operation either; its cost also depends on the length of the strings and the match ratio.)
Hello, I am trying to change the color of the sparkline in my table. I don't have the option to change it using the dropdown menu under "Column-specific formatting", so I'm trying to do it through the Code option. Here is the current code I have:

{
  "type": "splunk.table",
  "dataSources": { "primary": "ds_xNY7uyLU" },
  "title": "Top Notable Sources",
  "options": {
    "fontSize": "extraSmall",
    "columnFormat": {
      "sparkline": {
        "data": "> table | seriesByName(\"sparkline\") | formatByType(sparklineColumnFormatEditorConfig)",
        "sparklineColors": "> table | seriesByName(\"sparkline\") | matchValue(sparklineColorsEditorConfig)"
      }
    }
  },
  "context": {
    "sparklineColorsEditorConfig": [
      { "string": {}, "value": "#66aaf9" }
    ]
  },
  "showProgressBar": false,
  "showLastUpdated": false
}
What do you mean by "patterns"? The answer will greatly depend on how you define it. Depending on your needs, you can simply flatten the module list into a string and summarize which string occurs most often, or you can try other techniques, up to and including the MLTK app.
This is something you typically do in the search-head layer; it has nothing to do with HEC. And you're mixing different things here: EVAL-* entries belong directly in props.conf, not in a transforms.conf stanza. And again, if you have a bigger environment than an all-in-one setup, this goes on the search-head tier.
Hello Splunkers, I'm looking for the best algorithm to search for events with the criteria below.

I have a lookup with only one field, but multi-valued: about 10,000 lines, for example:

"vatsal, jagani"
"10.0.0.1, 10.0.0.2"

I want to search index=abc for the last 2 hours (about 50 events) to see if there are at least two events (but there can be more) that contain words from one set.

For example:
event-1 - "hello, I'm vatsal"
event-2 - "hello, I'm jagani too"
Here, two events have matching words from the same lookup row.

Another example:
event-3 - "hi, vatsal"
event-4 - "hello, vatsal"
This also counts as a match.

And I want to run this alert every hour.

Solution 1 - I could use the map command as below, but I don't think that's very efficient:

| inputlookup words_lookup.py
| eval or_field = <convert words to an OR list like "vatsal" OR "jagani">
| map max_count=1000000 "search index=abc $or_field$"

Solution 2 - I could write a Python script, but I'm not sure what algorithm to use.

I'm looking for a more efficient query or Python algorithm to do this efficiently.
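A hedged alternative to map is to invert the work: tokenize each event into words and look the words up, so the cost scales with words per event rather than with lookup rows. This sketch assumes the lookup has been restructured into a hypothetical words_by_row.csv with one word per line plus a set_id column identifying the original multi-valued row:

```spl
index=abc earliest=-2h
| eval word = split(lower(_raw), " ")
| mvexpand word
| lookup words_by_row.csv word OUTPUT set_id
| where isnotnull(set_id)
| stats dc(_raw) as matching_events by set_id
| where matching_events >= 2
```

With 50 events per run this reduces the work to roughly (words per event) lookup probes per event instead of a scan of all 10,000 patterns per event; punctuation stripping before the split may be needed so "vatsal." still matches "vatsal".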
@PickleRick You're right; after several workarounds I finally figured out how to extract the list of modules and solved all the challenges.

Now I have a list of modules like this (grouped by id):

Txn1
16:30:53:002 moduleA
16:30:54:002 moduleA
16:30:55:002 moduleB
16:30:56:002 moduleC
16:30:57:002 moduleD
16:30:58:002 moduleE
16:30:59:002 moduleF
16:30:60:002 moduleZ

Txn2
16:30:54:002 moduleD
16:30:55:002 moduleE
16:30:56:002 moduleY

How can I use Splunk to find patterns (flows) of modules, i.e. the most common patterns and the rare ones?
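A hedged starting point is to flatten each transaction's modules into a single ordered flow string and count the flows; the field names txn_id and module below are assumptions to adapt to the actual extraction:

```spl
... base search extracting txn_id and module ...
| sort 0 txn_id _time
| stats list(module) as flow by txn_id
| eval flow = mvjoin(flow, " > ")
| stats count by flow
| sort - count
```

`| sort - count` surfaces the most common flows; appending `| tail 10` (or using `rare flow` on the flattened field) surfaces the rare ones instead.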
I have a lookup table in Splunk. I want to check whether it has ever been updated in Splunk using the outputlookup command.
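Two hedged ways to check this, with the lookup name as a placeholder: the REST endpoint for lookup table files exposes an `updated` timestamp for the file, and the _audit index records the searches (including any containing outputlookup) that ran against it:

```spl
| rest /servicesNS/-/-/data/lookup-table-files splunk_server=local
| search title="mylookup.csv"
| table title eai:appName updated
```

For the audit trail, something like `index=_audit action=search search="*outputlookup*mylookup*"` shows who wrote to the lookup and when, assuming audit logging is enabled and retained for the period of interest.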
This is an old post, but I want to share the resolutions that worked for us in case someone else runs into the same error.

You'll usually see these bundle replication errors with the search below (edit the search with your search head and indexer hostnames; wildcard them if you want). Note: the Monitoring Console app has a dashboard for these types of errors under Search > Knowledge Bundle Replication.

index=_internal host IN (<YOUR_SH_HOSTNAME>, <YOUR_INDEXER_HOSTNAME>) source=*splunkd.log*
  (component=BundlesAdminHandler OR component=BundleDataProcessor OR component=BundleDeltaHandler OR component=BundleReplicationProvider OR component=BundleStatusManager OR component=BundleTransaction OR component=CascadePlan OR component=CascadeReplicationReaper OR component=CascadingBundleReplicationProvider OR component=CascadingReplicationManager OR component=CascadingReplicationTransaction OR component=CascadingReplicationStatusActor OR component=CascadingUploadHandler OR component=ClassicBundleReplicationProvider OR component=DistBundleRestHandler OR component=DistributedBundleReplicationManager OR component=GetCascadingReplicationStatusTransaction OR component=RFSManager OR component=RFSBundleReplicationProvider)
  (log_level=WARN OR log_level=ERROR)
  component=ClassicBundleReplicationProvider log_level=ERROR

Resolution 1: In the error logs, note down the search head reporting the errors and the indexers listed in the logs. Verify that the search head can connect to those indexers.

Resolution 2: Log into the search head that is reporting the error and check the timestamps of the content inside $SPLUNK_HOME/var/run/proxy_bundles (e.g. with the Linux command ls -lah). If the files are more than a few days old, move the proxy_bundles directory to a backup location and restart Splunk; this should fix the errors.
Hi, I am using Splunk 9.0.6, and I configured HEC + Syslog Connector for Splunk (SC4S) for data ingestion. At the moment, I receive events from our two different firewalls (Palo Alto and Stormshield). My problem is that Stormshield is not directly supported by SC4S, so the extracted fields are not CIM compliant. More precisely, the field action should contain "blocked" or "allowed" as possible values, but it contains "pass" and "block" instead.

My question is what would be the best way to implement this transformation. I tried creating the following files in the path C:\Program Files\Splunk\etc\apps\splunk_httpinput\local:

props.conf

[StormShield:StormShield]
TRANSFORMS = rewriteaction

transform.conf

[rewriteaction]
EVAL-action = case(action="pass", "allowed", action="block", "blocked", 1=1, "UNKNOWN")

I restarted Splunk, but nothing really happened. Any idea what I am doing wrong? Many thanks.
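A hedged correction to the config above: EVAL-* settings are search-time knowledge and belong directly in props.conf, with no TRANSFORMS indirection (TRANSFORMS references index-time REGEX/FORMAT stanzas, which is a different mechanism). Also note the file would be transforms.conf, not transform.conf, if it were needed at all. Using the sourcetype stanza from the post:

```ini
# props.conf on the search head tier (an app's local/ directory)
[StormShield:StormShield]
EVAL-action = case(action="pass", "allowed", action="block", "blocked", 1=1, "UNKNOWN")
```

Since this is search-time, it must live where the searches run (the search head, not necessarily the HEC input app), and a restart or debug/refresh is enough for it to take effect.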
It's actually worse: Splunk doesn't allow you to set wec_event_format to RenderedText if the channel name doesn't start with ForwardedEvents.

10-20-2023 12:49:20.893 +0200 ERROR ExecProcessor [6396 ExecProcessorSchedulerThread] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" WinEventCommonChannel - WinEventLogChannelBase::enumLocalWECSubscriptions: subscription:'Applocker' - Invalid WEC destination channel ACME-WEC-Workstations/Applocker for content format RenderedText. RenderedText format is supported only on ForwardedEvents or custom channels named ForwardedEvents-1, ForwardedEvents-2, etc.Consider creating custom channels as the destination log, or change the content format of the subscription to "Events". See the description for the 'wec_event_format' setting at $SPLUNK_HOME/etc/system/README/inputs.conf.spec for more details.

Also, you can't set wec_event_format to 'Events' for the ForwardedEvents channel, so forget about having mixed events in the same channel. It's amazing how such a breaking change was slipped in so quietly.
DensityFunction and AutoAnomalyDetection are vastly different algorithms, so different results are to be expected. See "Developing the Splunk App for Anomaly Detection" for more info on the Anomaly Detection App's custom algorithm, and "Algorithms in the Machine Learning Toolkit" in the Splunk documentation for the MLTK's DensityFunction. At least in my testing, the ADESCA/Earthgecko-Skyline stack in the Anomaly Detection App is more prone to alerting on non-cyclical low values compared to the boundaries generated by DensityFunction, though I have no good explanation for this behavior as of right now.
There is no other portion; running the same search as in your screenshot, I get the error.