All Posts

OK. I added an idea https://ideas.splunk.com/ideas/EID-I-2471 Feel free to upvote and/or comment
It's the MSCS and Google TA. On the SHC, inputs.conf has been removed from both default and local, yet the error below still appears on all the members:

    ERROR ModularInputs [1990877 ConfReplicationThread] - Unable to initialize modular input "mscs_storage_table" defined in the app "splunk_ta_microsoft-cloudservices": Introspecting scheme=mscs_storage_table: script running failed
You have to configure the webhook input as described in the shared docs: launch the Microsoft Teams Add-on for Splunk and select Inputs > Create New Input > Teams Webhook. Have you done that? If not, create the input first, and then:

The webhook address will be available via the internal IP of the instance where you've configured the webhook, on the port you chose during the webhook setup:

    curl <internal_ip_of_your_splunk_instance>:<the_configured_port> -d '{"value": "test"}'

For an initial test you could run the curl on the same instance where you've configured the webhook:

    curl 127.0.0.1:<the_configured_port> -d '{"value": "test"}'

To make the webhook address publicly accessible there are of course different ways, as mentioned in the documentation: the webhook must be a publicly accessible, HTTPS-secured endpoint that is addressable via a URL. You have two options for the Splunk instance running the Teams add-on: you can make it publicly accessible via HTTPS, or you can put a load balancer, reverse proxy, tunnel, etc. in front of it. The second option can be preferable if you don't want to expose the Splunk heavy forwarder to the internet, as the public traffic terminates at that demarcation and then continues on internally to the Splunk heavy forwarder.
I have been trying to set up Splunk on my Kubernetes cluster so I can use it with a Python script to access the REST API. I have a Splunk Enterprise standalone instance running. I used a Traefik ingress to expose port 8089:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: splunk-ingress
      namespace: splunk
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-issuer
        traefik.ingress.kubernetes.io/router.entrypoints: websecure
    spec:
      ingressClassName: common-traefik
      tls:
        - hosts:
            - splunk.example.com
          secretName: app-certificate
      rules:
        - host: splunk.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: splunk-stdln-standalone-service
                    port:
                      number: 8089

When I try to curl through the ingress it returns an internal server error:

    curl -X POST https://splunk.example.com/services/auth/login --data-urlencode username=admin --data-urlencode password=<mysplunkpassword> -k -v

Output:

    * Host splunk.example.com:443 was resolved.
    * IPv6: (none)
    * IPv4: xx.xx.xxx.xxx
    * Trying xx.xx.xxx.xxx:443...
    * Connected to splunk.example.com (xx.xx.xxx.xxx) port 443
    * ALPN: curl offers h2,http/1.1
    * (304) (OUT), TLS handshake, Client hello (1):
    * (304) (IN), TLS handshake, Server hello (2):
    * (304) (IN), TLS handshake, Unknown (8):
    * (304) (IN), TLS handshake, Certificate (11):
    * (304) (IN), TLS handshake, CERT verify (15):
    * (304) (IN), TLS handshake, Finished (20):
    * (304) (OUT), TLS handshake, Finished (20):
    * SSL connection using TLSv1.3 / AEAD-CHACHA20-POLY1305-SHA256 / [blank] / UNDEF
    * ALPN: server accepted h2
    * Server certificate:
    *  subject: CN=splunk.example.com
    *  start date: Dec 6 23:53:06 2024 GMT
    *  expire date: Mar 6 23:53:05 2025 GMT
    *  issuer: C=US; O=Let's Encrypt; CN=R10
    *  SSL certificate verify ok.
    * using HTTP/2
    * [HTTP/2] [1] OPENED stream for https://splunk.example.com/services/auth/login
    * [HTTP/2] [1] [:method: POST]
    * [HTTP/2] [1] [:scheme: https]
    * [HTTP/2] [1] [:authority: splunk.example.com]
    * [HTTP/2] [1] [:path: /services/auth/login]
    * [HTTP/2] [1] [user-agent: curl/8.7.1]
    * [HTTP/2] [1] [accept: */*]
    * [HTTP/2] [1] [content-length: 34]
    * [HTTP/2] [1] [content-type: application/x-www-form-urlencoded]
    > POST /services/auth/login HTTP/2
    > Host: splunk.example.com
    > User-Agent: curl/8.7.1
    > Accept: */*
    > Content-Length: 34
    > Content-Type: application/x-www-form-urlencoded
    >
    * upload completely sent off: 34 bytes
    < HTTP/2 500
    < content-length: 21
    < date: Mon, 09 Dec 2024 06:54:50 GMT
    <
    * Connection #0 to host splunk.example.com left intact
    Internal Server Error%

When I port-forward to localhost, the curl works:

    curl -X POST https://localhost:8089/services/auth/login --data-urlencode username=admin --data-urlencode password=<mysplunkpassword> -k -v

Output:

    Note: Unnecessary use of -X or --request, POST is already inferred.
    * Host localhost:8089 was resolved.
    * IPv6: ::1
    * IPv4: 127.0.0.1
    * Trying [::1]:8089...
    * Connected to localhost (::1) port 8089
    * ALPN: curl offers h2,http/1.1
    * (304) (OUT), TLS handshake, Client hello (1):
    * (304) (IN), TLS handshake, Server hello (2):
    * TLSv1.2 (IN), TLS handshake, Certificate (11):
    * TLSv1.2 (IN), TLS handshake, Server key exchange (12):
    * TLSv1.2 (IN), TLS handshake, Server finished (14):
    * TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
    * TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
    * TLSv1.2 (OUT), TLS handshake, Finished (20):
    * TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
    * TLSv1.2 (IN), TLS handshake, Finished (20):
    * SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384 / [blank] / UNDEF
    * ALPN: server did not agree on a protocol. Uses default.
    * Server certificate:
    *  subject: CN=SplunkServerDefaultCert; O=SplunkUser
    *  start date: Dec 9 02:21:04 2024 GMT
    *  expire date: Dec 9 02:21:04 2027 GMT
    *  issuer: C=US; ST=CA; L=San Francisco; O=Splunk; CN=SplunkCommonCA; emailAddress=support@splunk.com
    *  SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway.
    * using HTTP/1.x
    > POST /services/auth/login HTTP/1.1
    > Host: localhost:8089
    > User-Agent: curl/8.7.1
    > Accept: */*
    > Content-Length: 34
    > Content-Type: application/x-www-form-urlencoded
    >
    * upload completely sent off: 34 bytes
    < HTTP/1.1 200 OK
    < Date: Mon, 09 Dec 2024 06:59:54 GMT
    < Expires: Thu, 26 Oct 1978 00:00:00 GMT
    < Cache-Control: no-store, no-cache, must-revalidate, max-age=0
    < Content-Type: text/xml; charset=UTF-8
    < X-Content-Type-Options: nosniff
    < Content-Length: 204
    < Connection: Keep-Alive
    < X-Frame-Options: SAMEORIGIN
    < Server: Splunkd
    <
    <response>
      <sessionKey>{some sessionKey...}</sessionKey>
      <messages>
        <msg code=""></msg>
      </messages>
    </response>
    * Connection #0 to host localhost left intact

I am using default confs; I'm not sure if I need to update my server.conf for this. More context: I checked splunkd.log from when I made the request and I get these logs:

    12-09-2024 17:19:36.904 +0000 WARN SSLCommon [951 HTTPDispatch] - Received fatal SSL3 alert. ssl_state='SSLv3 read client key exchange A', alert_description='bad certificate'.
    12-09-2024 17:19:36.904 +0000 WARN HttpListener [951 HTTPDispatch] - Socket error from 192.168.xx.xx:52528 while idling: error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate
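One possible reading of those splunkd.log lines (a guess, not a confirmed fix): the "bad certificate" alert is sent by the TLS client, so Traefik appears to reach splunkd over TLS but then rejects its self-signed certificate, and surfaces that as a 500. If so, Traefik needs to be told both that the backend speaks HTTPS and that it should skip verification of Splunk's default certificate. A sketch, assuming Traefik v2+ with its CRDs installed; the resource and annotation names should be checked against your Traefik version:

```yaml
# ServersTransport that skips verification of splunkd's self-signed cert
apiVersion: traefik.io/v1alpha1
kind: ServersTransport
metadata:
  name: splunk-transport
  namespace: splunk
spec:
  insecureSkipVerify: true
---
# Annotate the backend Service so Traefik uses HTTPS and the transport above
apiVersion: v1
kind: Service
metadata:
  name: splunk-stdln-standalone-service
  namespace: splunk
  annotations:
    traefik.ingress.kubernetes.io/service.serversscheme: "https"
    traefik.ingress.kubernetes.io/service.serverstransport: "splunk-splunk-transport@kubernetescrd"
```

For production you would replace `insecureSkipVerify` with a `rootCAs` reference to Splunk's CA, or install a proper certificate on splunkd's management port.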
Hi, what do you mean by small (&)? Do you mean by lowering the fonts?
Hi @yuanliu,

Thanks for your reply. Sorry for not explaining it properly.

1. The data input is from the lookup site_ids.csv:

    displayname    prefix
    abc12          23456789
    qwe14          78945612
    rty12          12356789
    yuui13         56897412

I need to display the displayname field values in a multiselect input dropdown, and I also need to pass the corresponding prefix values to my search. Say I select the displayname values abc12, qwe14 and rty12 in the input dropdown; then the dashboard panel search should receive the corresponding prefixes 23456789, 78945612 and 12356789 as ("23456789", "78945612", "12356789"), to be used with the IN operator.

Here is the search where I will be using the prefix token:

    index=abc sourcetype=sc*
    | fields _time index Eventts FIELD* source IPC
    | search IPC IN ($my_token$)
    | fields - source

Hope I'm clear now; please let me know if anything is missing. Thanks!
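A Simple XML multiselect along these lines should cover this: fieldForLabel shows displayname while fieldForValue passes prefix, and valuePrefix/valueSuffix/delimiter quote the selected values ready for the IN clause. Only a sketch; the token and lookup names are taken from the post above:

```xml
<input type="multiselect" token="my_token" searchWhenChanged="true">
  <label>Site</label>
  <fieldForLabel>displayname</fieldForLabel>
  <fieldForValue>prefix</fieldForValue>
  <search>
    <query>| inputlookup site_ids.csv | fields displayname prefix</query>
  </search>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter>,</delimiter>
</input>
```

With that in place, `| search IPC IN ($my_token$)` should expand to `| search IPC IN ("23456789","78945612","12356789")` for the example selection.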
Sorry, I probably didn't express myself well. I want the wildcards to be taken into account. So, based on the table I posted as an example, I would want results like this:

    title           totalEventCount  frozenTimePeriodInSecs  NumOfSearches
    _audit          771404957        188697600               23348  (_audit + _*)
    _configtracker  717              2592000                 22311  (_configtracker + _*)
    _internal       7039169453       15552000                24098  (_internal + _*)
@PickleRick What I mean to say is: the value of the TASKIDUPDATED field is always unique, so after applying the checkpoint value each event should be ingested only once, not multiple times.

Below is the setting I am currently using for DB Connect:

    connection = VIn
    disabled = 0
    index = group_data
    index_time_mode = current
    interval = */10 * * * *
    max_rows = 0
    mode = rising
    query = SELECT * FROM "WMCDB"."KLDGSF_ROUPOVERVIEW"\
    WHERE TASKIDUPDATED < ?\
    ORDER BY TASKIDUPDATED DESC
    query_timeout = 30
    sourcetype = overview_packgroup
    tail_rising_column_init_ckpt_value = {"value":null,"columnType":null}
    tail_rising_column_name = TASKIDUPDATED
    tail_rising_column_number = 3
    input_timestamp_column_number = 10
    input_timestamp_format =
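One thing worth double-checking in that stanza (an observation, not a confirmed fix): with a rising-column input, DB Connect substitutes the stored checkpoint for the `?` placeholder, and the conventional pattern selects rows greater than the checkpoint in ascending order so the checkpoint can advance past rows already ingested. With `< ?` and `DESC`, each run would re-read rows below the checkpoint, which could explain events being ingested multiple times. A sketch of the usual form:

```sql
SELECT *
FROM "WMCDB"."KLDGSF_ROUPOVERVIEW"
WHERE TASKIDUPDATED > ?          -- only rows newer than the last checkpoint
ORDER BY TASKIDUPDATED ASC       -- ascending, so the checkpoint ends on the max value
```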
Yes, you're right about timechart, but I wonder what the purpose is of rendering a timechart with no data points at any time span. I've certainly used an HTML panel instead of a null timechart; I can't see why you'd want to display something empty.
Hi Team,

We have a requirement to mask/filter data before ingestion in a Splunk Cloud environment (the customer has Splunk Cloud). I am reading through the Ingest Processor and Ingest Actions documentation, and it sounds like both provide much the same capability in Splunk Cloud. Is there any major difference between them?

Thanks,
SGS
Thanks @bowesmana, yes, this is the typical "solution" I've seen around; however, it does not work with `timechart` and similar time-bucket-constrained expressions. Certainly, if one is only after a solve for `stats`, this definitely works. This is my query:

    index=* source=squid_proxy_logs
    | search (warn* OR error*) AND _raw!="*SendEcho*" AND (NOT url=*) AND _raw!="*setrlimit: RLIMIT_NOFILE*"
    | timechart span=5m count(_raw) as hits

I've tried appendpipe, append, etc. tricks with a variety of expressions, such as:

    | appendpipe [| makeresults | where hits=0]
    | appendpipe [| makeresults | stats count(_raw) as count | where count=0]

and a few other alternates I've seen around, but all have the same issue: they work great when a single-vector stats result is null/empty, but they don't play well with timechart, unfortunately. I think the closest I can get is to makeresults myself into the spans and bins I need, and then aggregate the counts into those predefined bins; those bins would of course be generated from the search time range, so it would work for historical periods as well as realtime. I just need to rejig my query to always produce a fixed matrix/tabular output with the respective count value for each point in time, rather than trying to build a dataset from a result set that is empty to start with (as is the case when there are NO matching records)... so it kind of makes sense why this happens.
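For what it's worth, one way to pre-carve the bins as described above is to synthesize a zero-valued row per span from the search time range and merge it with the real counts. This is only a sketch: the 300 matches the 5m span, and `addinfo` supplies `info_min_time`/`info_max_time` from the time picker (so it won't behave for an "All time" range, where `info_min_time` is 0):

```
index=* source=squid_proxy_logs (warn* OR error*) _raw!="*SendEcho*" NOT url=* _raw!="*setrlimit: RLIMIT_NOFILE*"
| timechart span=5m count as hits
| append
    [| makeresults
     | addinfo
     | eval _time=mvrange(info_min_time, info_max_time, 300)
     | mvexpand _time
     | eval hits=0
     | fields _time hits]
| timechart span=5m sum(hits) as hits
```

The second `timechart` folds the synthetic zero rows into the same buckets as the real events, so every span appears even when no events matched at all.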
And a technique I use a reasonable amount in dashboards is to have a panel for results and a panel for no results, hidden behind tokens, e.g.

    <form version="1.1" theme="light">
      <label>tmp4</label>
      <fieldset submitButton="false">
        <input type="text" token="user" searchWhenChanged="true">
          <label>User</label>
        </input>
      </fieldset>
      <row>
        <panel>
          <html depends="$no_results$">
            <h1>No results found</h1>
          </html>
          <table depends="$has_results$">
            <search>
              <progress>
                <unset token="has_results"></unset>
                <unset token="no_results"></unset>
              </progress>
              <done>
                <eval token="has_results">if($job.resultCount$=0, null(), "true")</eval>
                <eval token="no_results">if($job.resultCount$&gt;0, null(), "true")</eval>
              </done>
              <query>index=_audit user=$user|s$</query>
              <earliest>-24h</earliest>
              <latest></latest>
              <sampleRatio>1</sampleRatio>
            </search>
            <option name="count">100</option>
            <option name="dataOverlayMode">none</option>
            <option name="drilldown">none</option>
            <option name="percentagesRow">false</option>
            <option name="refresh.display">progressbar</option>
            <option name="rowNumbers">false</option>
            <option name="totalsRow">false</option>
            <option name="wrap">true</option>
          </table>
        </panel>
      </row>
    </form>
Just to reiterate the general, simple solution to this issue (already posted in this thread) in case it gets read again: all you need to do is add the appendpipe clause to the end of the search, like this, where "NOUSER" is assumed not to exist, so without the appendpipe the search would return no results.

    index=_audit user=NOUSER
    | appendpipe
        [ | stats count
          | where count=0 ]
Wow, thanks for the rapid responses @SanjayReddy and @PickleRick. I didn't expect such a turnaround on my vent in this dead/old thread. I really do appreciate the constructive feedback, and I certainly understand the justification for why stats/timechart function as they do. It's just a shame that, after trawling most of those other linked threads and many hours of Google searches, none of the many suggested approaches oddly seems to fit the bill for what is actually a fairly small, simple query and expected result. It's one of those things where you just think: meh, this takes 30 seconds in ANSI SQL, NoSQL or any other RDBMS to produce the desired matrix/vector, but in Splunk I need my masters in SPL.

Anyway, thank you again; I wholeheartedly appreciate your positive and responsive attitudes given my pretty low-contribution post. I will check those threads you've provided which I haven't looked at before, and if all else fails, as you've suggested, I'll post afresh 🥰
Hi, just stumbled upon this one. You've probably already resolved the issue, but for anyone who might hit something similar in the future: roles are not returned by the Auth0 SAML assertion by default, but you can implement a rule that fixes that. I created a guide a while back: https://isbyr.com/return-user-roles-in-auth0/ Hope this helps.
You can do this at the end:

    | eval title=coalesce(title, usedData)
    | fields - usedData
    | stats values(*) as * by title

Note that you seem to pull in a bunch of macros that do not contain any index searches.
I think that using a script should work; just allow sudo without a password, restricted to the exact command if needed. Splunk has recognized this as a bug, but I haven't yet seen in Jira either an estimated fix version or a timeframe.
I don't know what the GlobalMantics dataset is, but Splunk does not ship with preloaded data; you need to ingest the data yourself. So if you have that dataset, you can ingest it into any version of Splunk.
Do you mean that in the second dashboard there are inputs which are not selected as you wanted? If so, it's probably because input tokens are set via form.xxx: if your Services input dropdown uses the token ShortConfigRuleName, then you should pass the URL with form.ShortConfigRuleName=$row.ShortConfigRuleName$.
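As a sketch, a table drilldown in the first dashboard could set that input like this; the app and dashboard names here are placeholders, and the `|u` filter URL-encodes the value:

```xml
<drilldown>
  <link target="_blank">/app/my_app/second_dashboard?form.ShortConfigRuleName=$row.ShortConfigRuleName|u$</link>
</drilldown>
```

Because the parameter is prefixed with `form.`, the second dashboard both populates the input with the value and runs the searches that depend on it.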
Not sure why you are seeing it executing randomly; that does not seem right. Can you produce a test case? However, I use this regularly for updating lookups: all you need to do is reset the tokens after the end of the search that writes the lookup. See the <done> section below. Unsetting the form.* tokens will remove the inputs from the display, and removing the non-form tokens will prevent the search from running until all 4 of the tokens are input. Note that in the <search> below I did it slightly differently, using makeresults, so there is a _time field which is when the search runs. It will add the new data to the END of the lookup, so that may not be useful for you. Note that when using append=t, the _time field will not get added to the existing lookup if it does not already exist.

    <form version="1.1" theme="light">
      <label>tmp3</label>
      <description>Replicate time picker issue</description>
      <fieldset submitButton="true" autoRun="false">
        <input type="text" token="usecasename" searchWhenChanged="false">
          <label>Enter UseCaseName Here</label>
        </input>
        <input type="text" token="error" searchWhenChanged="false">
          <label>Enter Error/Exception here</label>
        </input>
        <input type="text" token="impact" searchWhenChanged="false">
          <label>Enter Impact here</label>
        </input>
        <input type="text" token="reason" searchWhenChanged="false">
          <label>Enter Reason here</label>
        </input>
      </fieldset>
      <row depends="$hide$">
        <panel>
          <table>
            <title></title>
            <search>
              <query>
                | makeresults
                | eval useCaseName="$usecasename$", "Error/Exception in logs"="$error$", Impact="$impact$", Reason="$reason$"
                | outputlookup append=t lookup_exceptions_all_usecase1.csv
              </query>
              <earliest>-24h</earliest>
              <latest></latest>
              <sampleRatio>1</sampleRatio>
              <done>
                <unset token="usecasename"></unset>
                <unset token="error"></unset>
                <unset token="impact"></unset>
                <unset token="reason"></unset>
                <unset token="form.usecasename"></unset>
                <unset token="form.error"></unset>
                <unset token="form.impact"></unset>
                <unset token="form.reason"></unset>
              </done>
            </search>
            <option name="drilldown">none</option>
            <option name="refresh.display">progressbar</option>
          </table>
        </panel>
      </row>
    </form>