All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I want to use Stream to forward DNS to Splunk, but I am having trouble with the initial configuration.

Info:
- Running Splunk Enterprise on an on-prem Windows server. The DNS servers are Windows DCs.
- The Stream app and add-on are installed on the Splunk Enterprise server; the add-on is installed on the Windows DCs.

Troubleshooting:
- When I go into the Stream app, it runs the setup and I get an error: Unable to establish connection to /en-us/custom/splunk_app_stream/ping/: End of file. Note: I am able to ping the Splunk server from the DNS server, and port 8000 is open in the Splunk server firewall.
- When I go into Configure Streams, DNS is enabled.
- On the DNS server, the /etc/apps/Splunk_TA_stream/local/inputs.conf file contains: splunk_stream_app_location = https://SPLUNK-SERVERNAME:8000/en-us/custom/splunk_app_stream/
- On the DNS server, the /etc/apps/Splunk_TA_stream/default/streamfwd.conf file contains: [streamfwd] port = 8889 ipAddr = 127.0.0.1
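One thing that stands out in that streamfwd.conf is ipAddr = 127.0.0.1, which binds the forwarder to loopback only. A minimal sketch of what the local (not default) overrides might look like on each DC, reusing the settings from the question — the bind address shown is an assumption to adapt:

```ini
# $SPLUNK_HOME/etc/apps/Splunk_TA_stream/local/inputs.conf (on the DC)
[streamfwd://streamfwd]
splunk_stream_app_location = https://SPLUNK-SERVERNAME:8000/en-us/custom/splunk_app_stream/
disabled = 0

# $SPLUNK_HOME/etc/apps/Splunk_TA_stream/local/streamfwd.conf (on the DC)
# Override in local/ rather than editing default/, so upgrades don't revert it.
# Bind to a reachable interface (or 0.0.0.0) instead of loopback.
[streamfwd]
port = 8889
ipAddr = 0.0.0.0
```

Settings placed in local/ take precedence over default/, so the shipped defaults stay untouched.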
I am trying to get a list of all services that are in APM. The APM usage report does not provide the names, only the number of hosts. I need the names of all services that are in APM, and to be able to export them.
In the documentation <https://help.splunk.com/en/splunk-enterprise/manage-knowledge-objects/knowledge-management-manual/9.3/build-a-data-model/about-data-models>, it says that dataset constraints determine the first part of the search through: simple search filters (root event datasets and all child datasets), complex search strings (root search datasets), and transaction definitions (root transaction datasets). In my new data model I am trying to create a dataset constraint that selects only unique values of the field eventId. eventId is a number, e.g. 123456. My goal is to drop duplicated log lines. Is it possible to define this kind of dataset constraint?
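For context, a dataset constraint is essentially a search filter appended to the base search; whether a deduplicating step is allowed there is exactly the open question. At ordinary search time, dropping duplicate eventId values is straightforward (index and sourcetype names below are placeholders):

```spl
index=app_logs sourcetype=app_json eventId=*
| dedup eventId
```

dedup keeps the first (most recent) event per eventId and discards the rest, which is the behaviour the question describes wanting from the constraint.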
@PrewinThomas , yes, that's the correct way. I was able to figure it out yesterday: <form version="1.1" theme="dark"> <label>Health Log Source Analysis</label> <fieldset submitButton="false"></fieldset> <row> <panel> <input type="multiselect" token="selected_index" searchWhenChanged="true"> <label>Select Index(es)</label> <choice value="*">All</choice> <fieldForLabel>index</fieldForLabel> <fieldForValue>index</fieldForValue> <search> <query>| rest splunk_server=local /services/data/indexes | fields title | rename title as index | sort index</query> <earliest>-24h@h</earliest> <latest>now</latest> </search> </input> <table> <title>Index and Sourcetypes</title> <search> <query>| tstats values(sourcetype) as sourcetypes dc(host) as hosts_count dc(source) as sources_count where index IN($selected_index$) by index</query> <earliest>-24h@h</earliest> <latest>now</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> </form> That's the dashboard I used, and it worked.
Hello! If I use streamfwd as a light forwarder, is it possible to use outputs.conf? Could you share your config for this scenario? I can't find information in the documentation... Thanks!
Thank you for the example, I will take a look. Also, I tried to do the eval bin, but it would not let me use an if or case statement to set the bin size. Do you have an example?
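For what it's worth, one way to sketch a conditional bucket size without bin is to compute the span with eval and snap _time to it with a modulo (the index name and the 07:00-19:00 day window below are placeholders):

```spl
index=my_index
| eval h=tonumber(strftime(_time, "%H"))
| eval spansec=if(h>=7 AND h<19, 600, 3600)
| eval _time=_time - (_time % spansec)
| stats count by _time
```

Here spansec is 600 (10-minute buckets) during the day and 3600 (1-hour buckets) overnight; subtracting `_time % spansec` aligns each event to the start of its bucket, so a single stats by _time then produces the variable-width timechart.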
Hello picklerick, I was trying to compare today to last week, but based on the volume of data overnight I wanted the data in one-hour buckets, and during the day I wanted 10-minute buckets. This would be for an alert, where you can't use time-based tokens to set the span, so it would have to apply to the whole search.
My bad, I had typed it hardcoded with a capital-letter typo. It worked once I changed it to all lowercase.
@livehybrid  Yes, logs need to be forwarded to SC4S. ExtremeCloud IQ will be sending logs in the legacy syslog format (RFC 3164). Can we use an app parser configuration file on the syslog server where we plan to receive the Extreme AP logs in that format? Will this help normalize the data received from the Extreme APs when tweaked to match the log samples? Here is the resource I am referring to: https://splunk.github.io/splunk-connect-for-syslog/main/sources/  Also, do we need an add-on or app to be installed, or is defining the app parser .conf file on the syslog server enough?
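If it helps, SC4S app parsers for a custom RFC 3164 source are plain syslog-ng config dropped into the local config directory (typically /opt/sc4s/local/config/app_parsers/). The shape is roughly as below; the filter, vendor, and product values are assumptions to adapt to your actual log sample, so treat this as a sketch and verify it against the SC4S sources documentation linked above:

```conf
# /opt/sc4s/local/config/app_parsers/app-syslog-extreme_ap.conf (hypothetical name)
application app-syslog-extreme_ap[sc4s-syslog] {
    filter {
        # Match however your APs identify themselves; hostname glob is one option.
        host("extreme-ap-*" type(glob));
    };
    parser {
        # Assign vendor/product so SC4S routes to the right index/sourcetype.
        p_set_netsource_fields(
            vendor('extreme')
            product('ap')
        );
    };
};
```

With an app parser in place on the SC4S host, no additional add-on is needed there; any Splunk-side add-on would only be for field extractions after indexing.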
Hi @AsmaF2025 , I was able to resolve my jQuery issue by re-running the scan. You can check out the replies in this Reddit post — they were quite helpful: https://www.reddit.com/r/Splunk/comments/1g8ijdg/splunk_jquery_upgrade_to_35
Hello @PickleRick , I have an alert which runs every 5 minutes over data from 23 indexes and reports which indexes meet the criterion the alert was created for. On those same indexes I need to run a different search to find the error message, match the events by timestamp with a deviation of +/-0.5 seconds, and show the results in another alert. Please let me know if there is a way to do this. Thanks!
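One common sketch for the +/-0.5-second matching is to let a subsearch return a narrow time window for the outer search (the index names and the triggering condition below are placeholders):

```spl
index=app_index "error"
    [ search index=alert_index triggered_condition
      | head 1
      | eval earliest=_time-0.5, latest=_time+0.5
      | return earliest latest ]
| table _time index _raw
```

The subsearch's `return earliest latest` hands back an earliest/latest pair that constrains the outer search to the half-second window around the triggering event. With many triggering events you would return multiple windows or switch to a join/eventstats approach instead.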
Of course. I know it, you know it... But people tend to forget it. Way too many times I've seen average speed or average fuel consumption calculated by averaging multiple averages.
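A quick illustration of why: 10 overnight samples averaging 100 plus 1000 daytime samples averaging 10 give a true average of about 10.9, while the average of the two averages is 55. In SPL terms, the correct re-aggregation carries sums and counts forward rather than averaging the averages (value and group are placeholder field names):

```spl
| stats sum(value) as grp_sum, count as grp_n by group
| stats sum(grp_sum) as total, sum(grp_n) as n
| eval true_avg = total / n
```

Because each group contributes its sum and count, groups with more events correctly carry more weight in the final average.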
@mchoudhary  You can try the sample XML dashboard code below: <form> <label>Index Sourcetype and Host/Source Explorer</label> <fieldset submitButton="false" autoRun="true"> <input type="dropdown" token="selectedIndex" searchWhenChanged="true"> <label>Select Index:</label> <choice value="*">All</choice> <search> <query> | rest splunk_server=local /services/data/indexes | search disabled=0 | stats count by title | fields title | rename title as index_name </query> <earliest>-1s</earliest> <latest>now</latest> </search> <fieldForLabel>index_name</fieldForLabel> <fieldForValue>index_name</fieldForValue> <default>*</default> </input> </fieldset> <row> <panel> <table> <title>Details for Index: $selectedIndex$</title> <search> <query> | tstats values(sourcetype) as sourcetypes, dc(host) as host_count, dc(source) as source_count where index="$selectedIndex$" by index | eval sourcetypes = mvjoin(sourcetypes, ", ") </query> <earliest>-24h@h</earliest> <latest>now</latest> </search> <option name="drilldown">none</option> <option name="count">20</option> <option name="refresh">5m</option> <option name="refresh.auto.interval">300</option> </table> </panel> </row> </form> The $selectedIndex$ token in the tstats where clause is what ties the table to the dropdown. Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
@bgresty I strongly agree with what @livehybrid says. The 14-character detail you highlight points toward a configuration that is unintentionally filtering or misinterpreting data based on this field's length, which would live in your props/transforms .conf files.
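For reference, a length-based filter of that kind would typically look like the following props/transforms pair; the stanza names and the exact 14-character pattern here are hypothetical, shown only so you know what to grep for in your configs:

```ini
# props.conf
[my_sourcetype]
TRANSFORMS-drop_short = drop_14_char_events

# transforms.conf
[drop_14_char_events]
# Any event whose raw text is exactly 14 characters is routed to nullQueue (discarded).
REGEX = ^.{14}$
DEST_KEY = queue
FORMAT = nullQueue
```

Running `splunk btool props list --debug` and the transforms equivalent on the indexing tier is the quickest way to spot a stanza like this.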
@drodman29  As mentioned by everyone, the action.email.domain_allowlist setting in alert_actions.conf performs a strict, literal string match against the domain part of the email address. It does not natively support wildcards like *.mydomain.com. So when you set action.email.domain_allowlist = *.mydomain.com, Splunk literally looks for an email address like user@*.mydomain.com, which is not a valid email domain format and thus won't match a@temp.mydomain.com or b@perm.mydomain.com. So I believe a possible workaround is a scripted alert action: instead of using the built-in sendemail alert action directly from the Splunk UI for these specific alerts, configure the alert to trigger a custom script. Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
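Short of scripting, the only allowlist-native option is to enumerate each subdomain literally. Assuming the setting accepts a comma-separated list (check the alert_actions.conf.spec on your version), that would look like:

```ini
# alert_actions.conf
[email]
# List every concrete subdomain; wildcards are not honored.
action.email.domain_allowlist = temp.mydomain.com, perm.mydomain.com
```

This only scales if the set of subdomains is small and stable; otherwise the scripted alert action above remains the better route.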
@livehybrid @Kim  Yes, you are right, streamfwd.conf natively only takes a list of indexers or a HEC endpoint directly. The scenario I tested was using indexer discovery on the HF itself: streamfwd can dynamically take the list of indexers from outputs.conf, and it worked without any issues. The key is that the streamfwd process, after parsing network data, will then attempt to forward it. If its own streamfwd.conf doesn't specify a direct S2S or HEC target, it falls back to the Splunk forwarding mechanism configured in outputs.conf.
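For anyone reproducing this, the outputs.conf side of that fallback with indexer discovery looks roughly like the sketch below; the cluster manager URI, key, and group names are placeholders:

```ini
# outputs.conf on the heavy forwarder running streamfwd
[indexer_discovery:cm1]
master_uri = https://cluster-manager.example.com:8089
pass4SymmKey = <shared key configured on the cluster manager>

[tcpout:discovered_indexers]
indexerDiscovery = cm1

[tcpout]
defaultGroup = discovered_indexers
```

With no S2S/HEC target in streamfwd.conf, the parsed Stream data rides this tcpout group, so the indexer list stays in sync with the cluster automatically.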
A bit nerdy here, but @PickleRick, if you know in advance what you want to do and can figure out the maths, then you can do other aggregations too, e.g. post-aggregation of an average is simply sum/count: index=_audit | eval r=random() % 100 | timechart span=10m avg(r) as avg_r sum(r) as s_r count | eval h=strftime(_time, "%H"), d=strftime(_time, "%d"), m=strftime(_time, "%M") | eventstats sum(count) as count_1_hour sum(s_r) as sum_r_1_hour by d h | where (h>=7 AND h<19 OR m=0) | eval avg_r = if(h<7 OR h>=19, sum_r_1_hour / count_1_hour, avg_r) | fields - d h m sum_r_1_hour count_1_hour s_r Percentiles, on the other hand, are a little more complicated. I suspect using the sitimechart command will do a lot of the work for the first pass, and then it's a bit of post-processing of the psrsvd_rd* variables. I'm not totally sure how the si_* values are aggregated for percentiles; I played around with it some years ago and got lost in the weeds, but it was a somewhat interesting exercise.
Hi @shaunm001  Edit: you beat me to it; it seems you've got it sorted anyway. You need to use $click.value2$ instead. I've included an example below to test this. It doesn't seem well documented, but a few of the docs do reference click.value2; they seem to suggest click.value sets the "X-axis value", which it presumably infers as _time. <form version="1.1" theme="dark"> <label>Search and Filter Dashboard (Makeresults Base)</label> <description>A dashboard to search and filter generated logs with drilldown capabilities.</description> <!-- Define default token values --> <!-- Define the base search using makeresults --> <search id="baseSearch"> <query> | makeresults count=15 | eval _time = now() - round(random() * 86400) | streamstats count as rec_num | eval UserId = case( rec_num % 3 == 0, "user_A", rec_num % 3 == 1, "user_B", rec_num % 3 == 2, "user_C" ) | eval subject = case( rec_num % 5 == 0, "Email sent successfully", rec_num % 5 == 1, "Login attempt failed", rec_num % 5 == 2, "Report generated", rec_num % 5 == 3, "File upload complete", rec_num % 5 == 4, "System error detected" ) | eval Operation = case( rec_num % 4 == 0, "LOGIN", rec_num % 4 == 1, "EMAIL", rec_num % 4 == 2, "UPLOAD", rec_num % 4 == 3, "REPORT" ) | eval messageId = "msg-" . rec_num . "-" . substr(md5(_time), 1, 8) | eval ClientInfoString = "{\"os\":\"Windows\",\"browser\":\"Chrome\",\"ipAddress\":\"192.168.1.10" . (rec_num % 5) . "\"}" | eval _raw = json("{\"os\":\"Windows\",\"browser\":\"Chrome\",\"ipAddress\":\"192.168.1.10" . (rec_num % 5) . 
"\"}") | spath | fields _time, UserId, subject, Operation, messageId, ClientInfoString, ipAddress </query> <earliest>-24h@h</earliest> <latest>now</latest> </search> <fieldset submitButton="true"> <input type="text" token="messageIdTok"> <label>Message ID:</label> <default>*</default> <initialValue>*</initialValue> </input> <input type="text" token="userIdTok"> <label>User ID:</label> <default>*</default> <initialValue>*</initialValue> </input> <input type="text" token="subjectTok"> <label>Subject Contains:</label> <default>*</default> <initialValue>*</initialValue> </input> <input type="text" token="ipAddressTok"> <label>IP Address:</label> <default>*</default> <initialValue>*</initialValue> </input> </fieldset> <row> <panel> <table> <title>messageIdTok=$messageIdTok$ userIdTok=$userIdTok$ subjectTok=$subjectTok$ ipAddressTok=$ipAddressTok$</title> <search base="baseSearch"> <query> | search subject="*$subjectTok$*" UserId="$userIdTok$" ipAddress="$ipAddressTok$" messageId="$messageIdTok$" </query> </search> <option name="count">10</option> <option name="dataOverlayMode">none</option> <option name="drilldown">cell</option> <option name="percentagesRow">false</option> <option name="rowNumbers">false</option> <option name="totalsRow">false</option> <option name="wrap">true</option> <drilldown> <condition field="subject"> <set token="subjectTok">$row.subject$</set> <set token="messageIdTok">*</set> <set token="userIdTok">*</set> <set token="ipAddressTok">*</set> </condition> <condition field="messageId"> <set token="messageIdTok">$click.value2$</set> <set token="subjectTok">*</set> <set token="userIdTok">*</set> <set token="ipAddressTok">*</set> </condition> <condition field="UserId"> <set token="messageIdTok">*</set> <set token="subjectTok">*</set> <set token="userIdTok">$row.UserId$</set> <set token="ipAddressTok">*</set> </condition> <condition field="ipAddress"> <set token="messageIdTok">*</set> <set token="subjectTok">*</set> <set token="userIdTok">*</set> <set 
token="ipAddressTok">$row.ipAddress$</set> </condition> </drilldown> </table> </panel> </row> </form>  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Never mind... it looks like using $click.value2$ solves the problem.