All Posts

Ok. You are doing some strange things here. You're going over the same data several times extracting the same fields. You are doing negative matches. You're posting a partial search in pseudo-SPL. Just show us the source events (anonymized if need be) and describe the desired output and the relation between events and output, without using SPL.

In a shcluster the scheduler distributes scheduled searches among the cluster members, so if you have 3 SHs with 32 CPUs each, you effectively have 96 CPUs to distribute searches across. But a single search runs on a single SH and its results are replicated to the other members. Also, splunk show shcluster-status shows way more information than just "up".

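For reference, a minimal way to check this from any member's CLI; admin:<password> is a placeholder for your own credentials:

$SPLUNK_HOME/bin/splunk show shcluster-status -auth admin:<password>
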
shcluster status is up. If notables should trigger on only one SH and correlation searches only run on one search head, what is the point of having a shcluster? Also, what will happen to reports that use notable data? How will I control searches to run on only one SH?

You didn't illustrate what the expected results look like. Based on the last stats in your OP, you only want to filter for the first and last of people with a leave ticket, not to add any information about the ticket. Is this correct? In that case, just extract first and last in the second search and use it as a subsearch, like this:

index=collect_identities sourcetype=ldap:query
    [ search index=db_mimecast splunkAccountCode=* mcType=auditLog
      | fields user
      | dedup user
      | eval email=user, extensionAttribute10=user, extensionAttribute11=user
      | fields email extensionAttribute10 extensionAttribute11
      | format "(" "(" "OR" ")" "OR" ")" ]
    [ search index=db_service_now sourcetype="snow:incident" affect_dest="STL Leaver"
      | dedup description
      | rex field=description "Leaver Request for (?<first>\S+) (?<last>\S+) -"
      | fields first last ]
| dedup email
| eval identity=replace(identity, "Adm0", "")
| eval identity=replace(identity, "Adm", "")
| eval identity=lower(identity)
| stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity

Note that the extraction of first and last depends on the precise format of description; additionally, it assumes that first and last contain no white space.

I have 2 fields that hold 3 values each:

Field 1 values = a,b,c
Field 2 values = 1,2,3

Is there a way to table these without using the join/append/appendcols commands? This is how my search query looks so far, but I'm getting weird results:

index=example sourcetype=example1
| search "example"
| rex field=text "???<field1>"
| rex field=text "OTL<field1>"
...existing search query
| appendcols
    [ search index=example sourcetype=example1
      | search "example"
      | rex field=text "???<field1>"
      | rex field=text "OTL<field1>"
      | search field1 != c
      | rex field=text "<field2>"
      | table field1 field2
      | search field2 = 1 ]
| append
    [ search index=example sourcetype=example1
      | search "example"
      | rex field=text "???<field1>"
      | rex field=text "OTL<field1>"
      | search field1 != a field1 != b
      | rex field=text "<field2>"
      | table field1 field2
      | search field2 = 2 ]

The weird results I'm getting are:

To elaborate on @jawahir007 's answer. What you see "in settings" is forwarder monitoring. It only shows you what it can read from the forwarder's internal logs sent to your Splunk server. It shows your forwarder, so it means the output on the forwarder is set correctly to your Splunk server and the data is properly forwarded. I'm assuming so far no "production" data is being forwarded, just the forwarder's internal logs. What you're trying to do - adding an input on a remote forwarder - is something completely different, which is done with the Deployment Server functionality (see the sketch below). Typically in a big setup a Deployment Server is an additional server which "governs" the configuration of its deployment clients (usually forwarders). In your case, as you have just one Splunk server, you must point your forwarder to your server as @jawahir007 showed. BTW, in production use you normally don't use the GUI to add remote inputs, but that's a story for another time.

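For illustration, an input pushed from a Deployment Server typically lives in a deployment app; a minimal sketch, where the app name my_inputs and the monitored path are hypothetical:

# on the Deployment Server: $SPLUNK_HOME/etc/deployment-apps/my_inputs/local/inputs.conf
[monitor:///var/log/messages]
index = main
sourcetype = syslog
disabled = 0
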
Ouch.
1. I'd go for blacklisting events at the source forwarder, as @isoutamo already hinted (see the sketch after this list). It's way closer to the source and it saves you a lot of bandwidth and CPU downstream.
2. If possible, use XML-formatted Windows events.
3. As far as I remember, modern Windows inputs by default set the sourcetype to just WinEventLog or XMLWinEventLog. The channels are specified in the source field, not in the sourcetype, so your whole props stanza will not match.
4. Yes, order of operations does matter, but yours is pretty OK (although the WinEventCode5156Drop transform is pointless, since next you're setting all events' queue to nullQueue).

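A minimal sketch of source-side blacklisting in the forwarder's inputs.conf, assuming the noise is EventCode 5156 on the Security channel (adjust the channel and codes to your actual data):

[WinEventLog://Security]
disabled = 0
renderXml = true
# drop the noisy Windows Filtering Platform events before they leave the host
blacklist1 = EventCode="5156"
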
1. Why would you use version 7.0.3???
2. Why use the container anyway?
3. You're exposing port 8000 as 80. Are you planning on running unprotected HTTP?
4. Did you look into the logs?
5. We have no idea what is in your entrypoint.sh.
6. Why not just install the rpm (even if inside the container)?

Ok. So you are simply extracting the fields using some predefined "anchor points". You are in for a treat if ever the "constant" parts of your event change. It would be best if you could - as I said at the beginning - do something with the data as it goes into your system. Without that, any searching across your data will be hugely inefficient. In the current situation it would probably be best to extract whole rows, then do mvexpand, and then extract single fields from each line. You could do it by "counting" quotes, but there's one caveat. It's trivial if you assume your fields' contents cannot contain escaped quotes. It gets a bit tricky if you can have escaped quotes. It gets annoyingly complicated if you can have escaped quotes and escaped backslashes in your field values.

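A minimal sketch of that row-then-field approach, assuming comma-separated quoted fields with no escaped quotes (the field names are hypothetical):

| rex max_match=0 field=_raw "(?<row>\"[^\"]*\"(?:,\"[^\"]*\")+)"
| mvexpand row
| rex field=row "^\"(?<first_col>[^\"]*)\",\"(?<second_col>[^\"]*)\""
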
This would be more efficient if indexA was a lookup table, but this query should get you started. Others may bristle at the use of join, but they are welcome to submit alternatives.

index=indexb OR index=indexa
| stats values(*) as * by transactionID
| join customerID
    [ search index=indexa ]
| table timestamp customerID transactionID status type

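For reference, if the indexA data were maintained as a lookup instead (a hypothetical customers.csv lookup file keyed by customerID), the join could be dropped entirely; a sketch:

index=indexb
| lookup customers.csv customerID OUTPUT status type
| table timestamp customerID transactionID status type
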
Hi @dixa0123, SplunkWeb uses hidden field attributes to identify aggregations for trellis mode in Simple XML. (I haven't tried this in Dashboard Studio.) Here's a sample search that summarizes data, calculates a global mean, reformats the results, and then uses the global mean as an overlay in trellis mode:

index=_internal
| timechart limit=10 span=1m usenull=f useother=f count as x by component
| untable _time component x
``` calculate a global mean ```
| eventstats avg(x) as tmp
``` append temporary events to hold the mean as a series ```
| appendpipe
    [| stats values(tmp) as x by _time
     | eval component="tmp" ]
``` reformat the results for trellis ```
| xyseries _time component x
``` disassociate the tmp field from aggregations to use as an overlay ```
| eval baseline=tmp
``` remove the tmp field ```
| fields - tmp

Hi @catta99, You probably want to start with the buttons disabled and then enable them when the dashboard's async searches are done. You can use SplunkJS to attach search:done event handlers to your searches (see below). A complex dashboard (multiple searches, multiple buttons, etc.) may require a more complex solution. You can find more information in the SplunkJS documentation or, more generally, in your favorite web development resources (or AI stack, if you use one).

<!-- button_test.xml -->
<dashboard version="1.1" theme="light" script="button_test.js">
  <label>button_test</label>
  <search id="search1">
    <query>| stats count</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <html>
        <!-- assign a value to the disabled attribute to pass SplunkWeb's Simple XML validation -->
        <button id="button1" disabled="disabled">Button 1</button>
      </html>
    </panel>
  </row>
</dashboard>

// button_test.js
require([
    "jquery",
    "splunkjs/mvc",
    "splunkjs/mvc/simplexml/ready!"
], function($, mvc) {
    var search1 = mvc.Components.get("search1");
    search1.on("search:done", function(properties) {
        $("#button1").prop("disabled", false);
    });
    $("#button1").on("click", function() {
        alert("Button 1 clicked.");
    });
});

Hi @Ethil, To include time values from form inputs, SplunkWeb sends a rendered version of the dashboard XML to the pdfgen service. For example, given the Simple XML source:

<form version="1.1" theme="light">
  <label>my_dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="time_tok">
      <label></label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults | addinfo</query>
          <earliest>$time_tok.earliest$</earliest>
          <latest>$time_tok.latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

View the dashboard in SplunkWeb and change the time range to Earliest: -1h@h and Latest: @h. When you export the dashboard to PDF, SplunkWeb renders the following static dashboard:

<dashboard>
  <label>my_dashboard</label>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults | addinfo</query>
          <earliest>-1h@h</earliest>
          <latest>@h</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</dashboard>

Note that the form element is now a dashboard element, the fieldset element has been removed, and the time_tok.earliest and time_tok.latest token values have been propagated to the search earliest and latest elements. The dashboard is then XML-encoded:

&lt;dashboard&gt;
  &lt;label&gt;my_dashboard&lt;/label&gt;
  &lt;row&gt;
    &lt;panel&gt;
      &lt;table&gt;
        &lt;search&gt;
          &lt;query&gt;| makeresults | addinfo&lt;/query&gt;
          &lt;earliest&gt;-1h@h&lt;/earliest&gt;
          &lt;latest&gt;@h&lt;/latest&gt;
        &lt;/search&gt;
        &lt;option name="drilldown"&gt;none&lt;/option&gt;
        &lt;option name="refresh.display"&gt;progressbar&lt;/option&gt;
      &lt;/table&gt;
    &lt;/panel&gt;
  &lt;/row&gt;
&lt;/dashboard&gt;

Finally, the result is sent to the pdfgen service using the URL-encoded input-dashboard-xml parameter, illustrated here using curl over the management port (SplunkWeb uses a SplunkWeb endpoint) with line breaks removed from the payload:

curl -k -u admin -o my_dashboard_last_hour.pdf https://localhost:8089/services/pdfgen/render \
  --data-urlencode 'input-dashboard-xml=&lt;dashboard&gt;&lt;label&gt;my_dashboard&lt;/label&gt;&lt;row&gt;&lt;panel&gt;&lt;table&gt;&lt;search&gt;&lt;query&gt;| makeresults | addinfo&lt;/query&gt;&lt;earliest&gt;-1h@h&lt;/earliest&gt;&lt;latest&gt;@h&lt;/latest&gt;&lt;/search&gt;&lt;option name="drilldown"&gt;none&lt;/option&gt;&lt;option name="refresh.display"&gt;progressbar&lt;/option&gt;&lt;/table&gt;&lt;/panel&gt;&lt;/row&gt;&lt;/dashboard&gt;'

You can pass any static Simple XML to the pdfgen service; it doesn't need to be associated with a saved dashboard:

curl -k -u admin -o hello.pdf https://localhost:8089/services/pdfgen/render \
  --data-urlencode 'input-dashboard-xml=&lt;dashboard&gt;&lt;label&gt;Hello, World!&lt;/label&gt;&lt;/dashboard&gt;'

I have this Dockerfile where my base image is Red Hat 9:

ENV SPLUNK_PRODUCT splunk
ENV SPLUNK_VERSION 7.0.3
ENV SPLUNK_BUILD fa31da744b51
ENV SPLUNK_FILENAME splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz
ENV SPLUNK_HOME /opt/splunk
ENV SPLUNK_GROUP splunk
ENV SPLUNK_USER splunk
ENV SPLUNK_BACKUP_DEFAULT_ETC /var/opt/splunk
ENV OPTIMISTIC_ABOUT_FILE_LOCKING=1

RUN groupadd -r ${SPLUNK_GROUP} \
    && useradd -r -m -g ${SPLUNK_GROUP} ${SPLUNK_USER}

RUN dnf -y update \
    && dnf -y install --setopt=install_weak_deps=False glibc-langpack-en glibc-all-langpacks \
    && localedef -i en_US -f UTF-8 en_US.UTF-8 || echo "Locale generation failed" \
    && dnf clean all

ENV LANG en_US.UTF-8

# pdfgen dependency
RUN dnf -y install krb5-libs \
    && dnf clean all

# Download official Splunk release, verify checksum and unzip in /opt/splunk
# Also backup etc folder, so it will be later copied to the linked volume
RUN dnf -y install wget sudo
RUN mkdir -p ${SPLUNK_HOME} \
    && wget -qO /tmp/${SPLUNK_FILENAME} https://download.splunk.com/products/${SPLUNK_PRODUCT}/releases/${SPLUNK_VERSION}/linux/${SPLUNK_FILENAME} \
    && wget -qO /tmp/${SPLUNK_FILENAME}.md5 https://download.splunk.com/products/${SPLUNK_PRODUCT}/releases/${SPLUNK_VERSION}/linux/${SPLUNK_FILENAME}.md5 \
    && (cd /tmp && md5sum -c ${SPLUNK_FILENAME}.md5) \
    && tar xzf /tmp/${SPLUNK_FILENAME} --strip 1 -C ${SPLUNK_HOME} \
    && rm /tmp/${SPLUNK_FILENAME} \
    && rm /tmp/${SPLUNK_FILENAME}.md5 \
    && dnf -y remove wget \
    && dnf clean all \
    && mkdir -p /var/opt/splunk \
    && cp -R ${SPLUNK_HOME}/etc ${SPLUNK_BACKUP_DEFAULT_ETC} \
    && rm -fR ${SPLUNK_HOME}/etc \
    && chown -R ${SPLUNK_USER}:${SPLUNK_GROUP} ${SPLUNK_HOME} \
    && chown -R ${SPLUNK_USER}:${SPLUNK_GROUP} ${SPLUNK_BACKUP_DEFAULT_ETC}

COPY etc/ /opt/splunk/etc/
COPY license.xml /splunk-license.xml
COPY entrypoint.sh /sbin/entrypoint.sh
RUN chmod +x /sbin/entrypoint.sh

EXPOSE 9998/tcp
EXPOSE 9999/tcp

WORKDIR /opt/splunk

ENV SPLUNK_CMD edit user admin -password admin -auth admin:changeme --accept-license --no-prompt
ENV SPLUNK_CMD_1 add licenses /splunk-license.xml -auth admin:admin
ENV SPLUNK_START_ARGS --accept-license --answer-yes

VOLUME [ "/opt/splunk/etc", "/opt/splunk/var" ]

ENTRYPOINT ["/sbin/entrypoint.sh"]
CMD ["start-service"]

I also mount volumes in /data/splunk and use this command to run the container from the host:

docker run \
  --name splunk \
  --hostname splunk \
  -d \
  -p 80:8000 \
  -p 8088:8088 \
  -p 8089:8089 \
  -p 9998:9998 \
  -p 9999:9999 \
  -v $splunkVarRoot:/opt/splunk/var \
  -v $splunkEtcRoot:/opt/splunk/etc \
  -e "SPLUNK_START_ARGS=--accept-license --answer-yes" \
  $IMPL_DOCKER_REPO/$splunkVersion

docker run \
  --name splunk \
  --hostname splunk \
  -d \
  -p 80:8000 \
  -p 8088:8088 \
  -p 8089:8089 \
  -p 9998:9998 \
  -p 9999:9999 \
  -v /data/splunk/var:/opt/splunk/var \
  -v /data/splunk/etc:/opt/splunk/etc \
  -e "SPLUNK_START_ARGS=--accept-license --answer-yes" \
  my_image

The UI is working and seems OK, but I don't see any data, and I get this: 'kv store process terminated abnormally exit code 1'. What should I do?

Based on your example and regex this should work. See https://regex101.com/r/puu59N/1 . Probably what you get from Windows into Splunk is somehow different, and for that reason it didn't match your regex. r. Ismo

Hi, are you sure that the indexers are the first full Splunk instance after your source? If there is anything like a HF before the indexers, then you must add that props.conf there, as it takes effect only on the first full Splunk instance. r. Ismo

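For illustration only, whatever stanza you have would need to be placed on that first full instance; a hypothetical sketch, where the sourcetype and transform names are placeholders:

# props.conf on the HF, i.e. the first full Splunk instance in the data path
[my:sourcetype]
TRANSFORMS-null = drop_unwanted_events
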
Hi, a nice place to test regex is regex101.com. Here is one example of how this can be achieved: https://regex101.com/r/5maP5V/1

| rex field=msg_old "\b(?<msg_keyword>full)\b"

If you want to select only events which have the word full in the field msg_old, then you should try

| regex msg_old = "\bfull\b"

r. Ismo

As the error message in your screenshot says, configure the universal forwarder as a deployment client of your Splunk server.

1. Enable Deployment Client on the Universal Forwarder
First, log in to the server where the Universal Forwarder is installed.

2. Create a Deployment Client Configuration
Edit or create the deploymentclient.conf file in the following path:

$SPLUNK_HOME/etc/system/local/deploymentclient.conf

Add the following configuration:

[deployment-client]
# Enable the deployment client
disabled = false

[target-broker:deploymentServer]
# Specify the IP address or hostname and port of the Deployment Server
targetUri = <deployment_server_ip>:<deployment_server_port>

<deployment_server_ip>: IP address or hostname of the Splunk Deployment Server.
<deployment_server_port>: The port configured for the Deployment Server (default is 8089).

For example:

[deployment-client]
disabled = false

[target-broker:deploymentServer]
targetUri = 192.168.1.100:8089

3. Restart the Splunk Universal Forwarder
To apply the changes, restart the Splunk Universal Forwarder:

$SPLUNK_HOME/bin/splunk restart

4. Verify the Deployment Client Connection on the Deployment Server
On the Splunk Deployment Server, go to Settings > Forwarder Management. Under Clients, you should see the new Universal Forwarder listed as a deployment client.

------
If you find this solution helpful, please consider accepting it and awarding karma points!

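Alternatively, the same configuration can be written for you by the forwarder's CLI; a minimal sketch, assuming the example address above and the default management port 8089:

# equivalent to editing deploymentclient.conf by hand
$SPLUNK_HOME/bin/splunk set deploy-poll 192.168.1.100:8089
$SPLUNK_HOME/bin/splunk restart
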
Hi, currently Splunk doesn't have this kind of feature (e.g. sudo, or Run As in Windows). There is one item on ideas.splunk.com, https://ideas.splunk.com/ideas/E-I-15 , which is not exactly for this, but I think it could be usable if Splunk decides to implement it. Currently the only way to fulfill this requirement is to create an additional user, but as you are using SSO that generates its own issues.... r. Ismo

Hi, it seems that you have the wrong cloud HEC endpoint. You should use https://http-inputs-<your stack>.splunkcloud.com/<endpoint>. See more here: Send data to HTTP Event Collector. There are some differences based on where your Cloud stack is hosted and which Splunk Cloud experience it has. r. Ismo

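A minimal sketch of sending a test event, assuming a standard (non-trial) Splunk Cloud Platform stack and a valid HEC token in place of <hec_token>:

curl "https://http-inputs-<your stack>.splunkcloud.com/services/collector/event" \
  -H "Authorization: Splunk <hec_token>" \
  -d '{"event": "hello world", "sourcetype": "manual"}'
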