All Posts



Hi @dixa0123, SplunkWeb uses hidden field attributes to identify aggregations for trellis mode in Simple XML. (I haven't tried this in Dashboard Studio.) Here's a sample search that summarizes data, calculates a global mean, reformats the results, and then uses the global mean as an overlay in trellis mode:

index=_internal
| timechart limit=10 span=1m usenull=f useother=f count as x by component
| untable _time component x
``` calculate a global mean ```
| eventstats avg(x) as tmp
``` append temporary events to hold the mean as a series ```
| appendpipe [| stats values(tmp) as x by _time | eval component="tmp" ]
``` reformat the results for trellis ```
| xyseries _time component x
``` disassociate the tmp field from aggregations to use as an overlay ```
| eval baseline=tmp
``` remove the tmp field ```
| fields - tmp
Hi @catta99, You probably want to start with the buttons disabled and then enable them when the dashboard's async searches are done. You can use SplunkJS to attach search:done event handlers to your searches (see below). A complex dashboard (multiple searches, multiple buttons, etc.) may require a more complex solution. You can find more information in the SplunkJS documentation or, more generally, in your favorite web development resources (or AI stack, if you use one).

<!-- button_test.xml -->
<dashboard version="1.1" theme="light" script="button_test.js">
  <label>button_test</label>
  <search id="search1">
    <query>| stats count</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <html>
        <!-- assign a value to the disabled attribute to pass SplunkWeb's Simple XML validation -->
        <button id="button1" disabled="disabled">Button 1</button>
      </html>
    </panel>
  </row>
</dashboard>

// button_test.js
require([
  "jquery",
  "splunkjs/mvc",
  "splunkjs/mvc/simplexml/ready!"
], function($, mvc) {
  // Enable the button once the search completes.
  var search1 = mvc.Components.get("search1");
  search1.on("search:done", function(properties) {
    $("#button1").prop("disabled", false);
  });
  $("#button1").on("click", function() {
    alert("Button 1 clicked.");
  });
});
Hi @Ethil, To include time values from form inputs, SplunkWeb sends a rendered version of the dashboard XML to the pdfgen service. For example, given the Simple XML source:

<form version="1.1" theme="light">
  <label>my_dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="time_tok">
      <label></label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults | addinfo</query>
          <earliest>$time_tok.earliest$</earliest>
          <latest>$time_tok.latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

View the dashboard in SplunkWeb and change the time range to Earliest: -1h@h and Latest: @h. When you export the dashboard to PDF, SplunkWeb renders the following static dashboard:

<dashboard>
  <label>my_dashboard</label>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults | addinfo</query>
          <earliest>-1h@h</earliest>
          <latest>@h</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</dashboard>

Note that the form element is now a dashboard element, the fieldset element has been removed, and the time_tok.earliest and time_tok.latest token values have been propagated to the search earliest and latest elements.
The dashboard is then XML-encoded:

&lt;dashboard&gt; &lt;label&gt;my_dashboard&lt;/label&gt; &lt;row&gt; &lt;panel&gt; &lt;table&gt; &lt;search&gt; &lt;query&gt;| makeresults | addinfo&lt;/query&gt; &lt;earliest&gt;-1h@h&lt;/earliest&gt; &lt;latest&gt;@h&lt;/latest&gt; &lt;/search&gt; &lt;option name="drilldown"&gt;none&lt;/option&gt; &lt;option name="refresh.display"&gt;progressbar&lt;/option&gt; &lt;/table&gt; &lt;/panel&gt; &lt;/row&gt; &lt;/dashboard&gt;

Finally, the result is sent to the pdfgen service using the URL-encoded input-dashboard-xml parameter, illustrated here using curl over the management port (SplunkWeb uses a SplunkWeb endpoint) with line breaks removed:

curl -k -u admin -o my_dashboard_last_hour.pdf https://localhost:8089/services/pdfgen/render \
  --data-urlencode 'input-dashboard-xml=&lt;dashboard&gt;&lt;label&gt;my_dashboard&lt;/label&gt;&lt;row&gt;&lt;panel&gt;&lt;table&gt;&lt;search&gt;&lt;query&gt;| makeresults | addinfo&lt;/query&gt;&lt;earliest&gt;-1h@h&lt;/earliest&gt;&lt;latest&gt;@h&lt;/latest&gt;&lt;/search&gt;&lt;option name="drilldown"&gt;none&lt;/option&gt;&lt;option name="refresh.display"&gt;progressbar&lt;/option&gt;&lt;/table&gt;&lt;/panel&gt;&lt;/row&gt;&lt;/dashboard&gt;'

You can pass any static Simple XML to the pdfgen service; it doesn't need to be associated with a saved dashboard:

curl -k -u admin -o hello.pdf https://localhost:8089/services/pdfgen/render \
  --data-urlencode 'input-dashboard-xml=&lt;dashboard&gt;&lt;label&gt;Hello, World!&lt;/label&gt;&lt;/dashboard&gt;'
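If you prefer to script the export, the same two encoding steps (XML-escaping the dashboard, then form-encoding it as the input-dashboard-xml parameter) can be sketched in Python. This is a hedged sketch, not official tooling: the host, port, credentials, and output filename are assumptions for illustration.

```python
# Sketch: render static Simple XML to PDF via the pdfgen endpoint.
# Host, port, and admin:changeme credentials are placeholder assumptions.
import base64
import urllib.parse
import urllib.request
from xml.sax.saxutils import escape

dashboard_xml = "<dashboard><label>Hello, World!</label></dashboard>"

# Step 1: XML-encode the dashboard, as SplunkWeb does before sending it.
encoded_xml = escape(dashboard_xml)

# Step 2: form/URL-encode it as the input-dashboard-xml parameter
# (this is what curl's --data-urlencode does).
body = urllib.parse.urlencode({"input-dashboard-xml": encoded_xml}).encode()

req = urllib.request.Request(
    "https://localhost:8089/services/pdfgen/render",
    data=body,
    method="POST",
)
req.add_header("Authorization",
               "Basic " + base64.b64encode(b"admin:changeme").decode())

# Uncomment to actually send the request (curl's -k corresponds to an
# unverified SSL context) and save the PDF:
# import ssl
# ctx = ssl._create_unverified_context()
# with urllib.request.urlopen(req, context=ctx) as resp, open("hello.pdf", "wb") as f:
#     f.write(resp.read())

print(encoded_xml)  # the XML-encoded dashboard, as in the example above
```

The request itself is left commented out so the sketch runs without a reachable Splunk instance.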
I have this Dockerfile where my base image is Red Hat 9:

ENV SPLUNK_PRODUCT splunk
ENV SPLUNK_VERSION 7.0.3
ENV SPLUNK_BUILD fa31da744b51
ENV SPLUNK_FILENAME splunk-${SPLUNK_VERSION}-${SPLUNK_BUILD}-Linux-x86_64.tgz
ENV SPLUNK_HOME /opt/splunk
ENV SPLUNK_GROUP splunk
ENV SPLUNK_USER splunk
ENV SPLUNK_BACKUP_DEFAULT_ETC /var/opt/splunk
ENV OPTIMISTIC_ABOUT_FILE_LOCKING=1

RUN groupadd -r ${SPLUNK_GROUP} \
    && useradd -r -m -g ${SPLUNK_GROUP} ${SPLUNK_USER}

RUN dnf -y update \
    && dnf -y install --setopt=install_weak_deps=False glibc-langpack-en glibc-all-langpacks \
    && localedef -i en_US -f UTF-8 en_US.UTF-8 || echo "Locale generation failed" \
    && dnf clean all

ENV LANG en_US.UTF-8

# pdfgen dependency
RUN dnf -y install krb5-libs \
    && dnf clean all

# Download official Splunk release, verify checksum and unzip in /opt/splunk
# Also backup etc folder, so it will be later copied to the linked volume
RUN dnf -y install wget sudo
RUN mkdir -p ${SPLUNK_HOME} \
    && wget -qO /tmp/${SPLUNK_FILENAME} https://download.splunk.com/products/${SPLUNK_PRODUCT}/releases/${SPLUNK_VERSION}/linux/${SPLUNK_FILENAME} \
    && wget -qO /tmp/${SPLUNK_FILENAME}.md5 https://download.splunk.com/products/${SPLUNK_PRODUCT}/releases/${SPLUNK_VERSION}/linux/${SPLUNK_FILENAME}.md5 \
    && (cd /tmp && md5sum -c ${SPLUNK_FILENAME}.md5) \
    && tar xzf /tmp/${SPLUNK_FILENAME} --strip 1 -C ${SPLUNK_HOME} \
    && rm /tmp/${SPLUNK_FILENAME} \
    && rm /tmp/${SPLUNK_FILENAME}.md5 \
    && dnf -y remove wget \
    && dnf clean all \
    && mkdir -p /var/opt/splunk \
    && cp -R ${SPLUNK_HOME}/etc ${SPLUNK_BACKUP_DEFAULT_ETC} \
    && rm -fR ${SPLUNK_HOME}/etc \
    && chown -R ${SPLUNK_USER}:${SPLUNK_GROUP} ${SPLUNK_HOME} \
    && chown -R ${SPLUNK_USER}:${SPLUNK_GROUP} ${SPLUNK_BACKUP_DEFAULT_ETC}

COPY etc/ /opt/splunk/etc/
COPY license.xml /splunk-license.xml
COPY entrypoint.sh /sbin/entrypoint.sh
RUN chmod +x /sbin/entrypoint.sh

EXPOSE 9998/tcp
EXPOSE 9999/tcp

WORKDIR /opt/splunk

ENV SPLUNK_CMD edit user admin -password admin -auth admin:changeme --accept-license --no-prompt
ENV SPLUNK_CMD_1 add licenses /splunk-license.xml -auth admin:admin
ENV SPLUNK_START_ARGS --accept-license --answer-yes

VOLUME [ "/opt/splunk/etc", "/opt/splunk/var" ]

ENTRYPOINT ["/sbin/entrypoint.sh"]
CMD ["start-service"]

I also mount volumes under /data/splunk and use this command to run the container from the host:

docker run \
  --name splunk \
  --hostname splunk \
  -d \
  -p 80:8000 \
  -p 8088:8088 \
  -p 8089:8089 \
  -p 9998:9998 \
  -p 9999:9999 \
  -v $splunkVarRoot:/opt/splunk/var \
  -v $splunkEtcRoot:/opt/splunk/etc \
  -e "SPLUNK_START_ARGS=--accept-license --answer-yes" \
  $IMPL_DOCKER_REPO/$splunkVersion

docker run \
  --name splunk \
  --hostname splunk \
  -d \
  -p 80:8000 \
  -p 8088:8088 \
  -p 8089:8089 \
  -p 9998:9998 \
  -p 9999:9999 \
  -v /data/splunk/var:/opt/splunk/var \
  -v /data/splunk/etc:/opt/splunk/etc \
  -e "SPLUNK_START_ARGS=--accept-license --answer-yes" \
  my_image

The UI is working and seems OK, but I don't see any data, and I get this error: 'kv store process terminated abnormally exit code 1'. What should I do?
Based on your example and regex, this should work. See https://regex101.com/r/puu59N/1. What Windows actually sends to Splunk is probably somehow different, and for that reason it didn't match your regex. r. Ismo
Hi, are you sure the indexers are the first full Splunk instance after your source? If there is something like a heavy forwarder (HF) before the indexers, you must add that props.conf there, because it takes effect only on the first full Splunk instance. r. Ismo
Hi, a nice place to test regexes is regex101.com. Here is one example of how this can be achieved: https://regex101.com/r/5maP5V/1

| rex field=msg_old "\b(?<msg_keyword>full)\b"

If you want to select only events that have the word "full" in the field msg_old, then you should try:

| regex msg_old="\bfull\b"

r. Ismo
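For a quick offline check of the same word-boundary behavior, here is a small Python sketch (the sample strings are made up for illustration; Splunk's rex uses PCRE, but \b behaves the same way in Python's re):

```python
import re

# \b matches a word boundary, so "full" matches only as a whole word,
# mirroring the rex/regex patterns above.
pattern = re.compile(r"\b(?P<msg_keyword>full)\b")

assert pattern.search("disk is full now").group("msg_keyword") == "full"
assert pattern.search("disk is full")          # also matches at end of string
assert pattern.search("fullest") is None       # "full" inside a longer word: no match
assert pattern.search("carefully") is None     # same: no whole-word match
print("word-boundary matching behaved as expected")
```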
As the error message in your screenshot says, configure the universal forwarder as a deployment client of your Splunk server.

1. Enable Deployment Client on the Universal Forwarder
First, log in to the server where the Universal Forwarder is installed.

2. Create a Deployment Client Configuration
Edit or create the deploymentclient.conf file in the following path:
$SPLUNK_HOME/etc/system/local/deploymentclient.conf

Add the following configuration:

[deployment-client]
# Enable the deployment client
disabled = false

[target-broker:deploymentServer]
# Specify the IP address or hostname and port of the Deployment Server
targetUri = <deployment_server_ip>:<deployment_server_port>

<deployment_server_ip>: IP address or hostname of the Splunk Deployment Server.
<deployment_server_port>: The port configured for the Deployment Server (default is 8089).

For example:

[deployment-client]
disabled = false

[target-broker:deploymentServer]
targetUri = 192.168.1.100:8089

3. Restart the Splunk Universal Forwarder
To apply the changes, restart the Splunk Universal Forwarder:
$SPLUNK_HOME/bin/splunk restart

4. Verify the Deployment Client Connection on the Deployment Server
On the Splunk Deployment Server, go to Settings > Forwarder Management. Under Clients, you should see the new Universal Forwarder listed as a deployment client.

------
If you find this solution helpful, please consider accepting it and awarding karma points!
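As a side note, Splunk .conf files use INI-style stanzas, so you can sanity-check the syntax of the stanza above before deploying it. This is just an illustrative sketch; the server address is the same placeholder example as above, and in practice you would write the file to $SPLUNK_HOME/etc/system/local/deploymentclient.conf and restart the forwarder:

```python
import configparser

# Placeholder deployment server address, matching the example above.
conf_text = """\
[deployment-client]
disabled = false

[target-broker:deploymentServer]
targetUri = 192.168.1.100:8089
"""

# INI-style stanzas parse cleanly with configparser, which catches
# typos such as a missing "]" or a stray "=" before you deploy.
parser = configparser.ConfigParser()
parser.read_string(conf_text)

assert parser["deployment-client"]["disabled"] == "false"
assert parser["target-broker:deploymentServer"]["targetUri"] == "192.168.1.100:8089"
print("deploymentclient.conf stanzas parsed OK")
```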
Hi, currently Splunk doesn't have this kind of feature (e.g. sudo, or "run as" in Windows). There is one item on ideas.splunk.com, https://ideas.splunk.com/ideas/E-I-15, which is not exactly for this, but I think it could be usable if Splunk decided to implement it. Currently the only way to fulfill this requirement is to create an additional user, but as you are using SSO, that generates its own issues... r. Ismo
Hi, it seems that you have the wrong Cloud HEC endpoint. You should use https://http-inputs-<your stack>.splunkcloud.com/<endpoint>. See more here: Send data to HTTP Event Collector. There are some differences based on where your Cloud stack is hosted and which Experience it has. r. Ismo
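To make the endpoint shape concrete, here is a minimal Python sketch that builds (but does not send) an HEC event request. The stack name and token are placeholders, and /services/collector/event is one common HEC endpoint; substitute the endpoint your input actually uses:

```python
import json
import urllib.request

# Placeholder values for illustration only.
stack = "example-stack"
hec_token = "00000000-0000-0000-0000-000000000000"

# Splunk Cloud HEC endpoint shape: https://http-inputs-<stack>.splunkcloud.com/<endpoint>
url = f"https://http-inputs-{stack}.splunkcloud.com/services/collector/event"

event = {"event": "hello from HEC", "sourcetype": "manual"}
req = urllib.request.Request(
    url,
    data=json.dumps(event).encode(),
    headers={"Authorization": f"Splunk {hec_token}"},
    method="POST",
)

# Uncomment to actually send the event:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read())

print(req.full_url)
```

The send is left commented out so the sketch runs without network access or a real token.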
Hello everyone, I have set up my Splunk server and Splunk forwarder. When I explore the settings, I can see one host as shown in the image. However, when I try to add data from the Add Data section, I get an error like in the other image. Can you help me resolve this issue?  
Hi, as @richgalloway said, you must test your Python script with the command "splunk cmd python <..../bin/your script>". If you use "python <..../bin/your script>", it uses the wrong Python version. In any case, you should find hints in the _internal logs about why it didn't work. There are a couple of presentations on how to do development with Splunk, Python, and VS Code:
https://www.splunk.com/en_us/blog/it/splunk-enterprise-visual-studio-code-better-together.html
https://community.splunk.com/t5/All-Apps-and-Add-ons/How-do-I-debug-Python-code-running-in-Splunk-Enterprise/m-p/629355
https://conf.splunk.com/files/2022/slides/DEV1127C.pdf
There are also some other .conf presentations about this same area. r. Ismo
As others have already said, it's obvious that timestamp extraction is not working correctly. If you can get someone to check this from the MC (Monitoring Console) side, there should be an answer for the reason: Settings -> MC Indexing -> Inputs -> Data Quality. There are some selections for trying to find errors. Just click those error counts and it will open a query that shows more information about the issue. You can also modify that query to dig further. Until you can get someone to look at those, it's not possible to be sure of the real reason behind this. r. Ismo
Hi, you could try the MC (Monitoring Console) to look for possible errors in the ingestion phase: Settings -> MC Indexing -> Inputs -> Data Quality. There are some selections for trying to find errors. Just click those error counts and it will open a query that shows more information about the issue. You can also modify that query to dig further. r. Ismo
I have two indexes (index A and index B). I need to measure the response time of a top-up of a prepaid or postpaid number with the help of a transaction ID. From index A I can filter whether the transaction is prepaid or postpaid; index A contains the customer ID and type (prepaid or postpaid). In index B we have two logs: a request log and a response log. With the help of the customer ID from index A, I need to find the transaction ID in the request log, since the customer ID is not available in the response log. Once we get the transaction ID, we need to subtract the timestamps (response log time - request log time).

Index A log pattern -> _timestamp, customerID, type
Index B -> contains request and response logs.
Request log pattern -> timestamp, transactionID, customerID
Response log pattern -> timestamp, transactionID, status

Method to measure -> From index A we need to get the customerID, then go to index B to find the transactionID in the request log. With the help of the transactionID, we need to subtract the timestamps between the response and request logs in index B. Please help us with how we can proceed with an SPL query.
Any news on PHP 8.3 support, please? Also, will ARM CPU support be coming? Thanks.
This is the query I am using in my search. I need my output in multiple rows (snippet provided).

index=mail "*tanium*"
| spath body
| rex field=body max_match=0 "\"(?<Computer_name>.*)\",\"ACN"
| rex field=body max_match=0 "\"(?<Computer_name1>.*)\",\"\[n"
| rex field=Computer_name1 max_match=0 "(?<Computer_name2>.*)\",\"\[n"
| rex field=body max_match=0 "\,(?<Patch_List_Name1>.*)\"\["
| rex field=Patch_List_Name1 max_match=0 "\"(?<Patch_List_Name>.*)\",\""
| rex field=Patch_List_Name1 max_match=0 "\",\"(?<Compliance_status>.*)\""
| eval Computer_name=mvappend(Computer_name,Computer_name2)
| table Computer_name Compliance_status Patch_List_Name
Please post your current query inside a code block (the "</>" button) when you write your post. Then mock up what you want to see in the result and how. One picture is usually better than a thousand words.
Hi, at least in some older Splunk versions (e.g. 7.3.x) there was (probably) a bug that led to this kind of behavior when using REST with increased storage areas, such as an expanded filesystem. The fix was to restart splunkd. r. Ismo
Did you check the results which the initial rest command yields?