Hi, we implemented persistent queues to absorb high latency between locations. E.g. if the connection is down for some minutes or hours, the firewall does not buffer logs itself, so logs get lost once the queues fill up. Is there a way to monitor the persistent queues? Fill ratio or other metrics? I don't want to build custom scripts collecting information via bash.
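One way to check, offered only as a rough sketch: the forwarder's queue statistics are logged to metrics.log in the _internal index, and the assumption here is that your persistent queue shows up there alongside the in-memory queues (the group/name values can differ by version and input type, so verify against your own metrics.log first):

```
index=_internal source=*metrics.log* group=queue
| eval fill_pct=round(current_size_kb / max_size_kb * 100, 2)
| timechart max(fill_pct) by name
```

If the persistent queue does not appear under group=queue on your version, the same fill-ratio calculation can still be applied to whatever queue metric names it does emit.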
We are forwarding logs from a shared disk to Splunk Cloud using a universal forwarder on a server in a cluster configuration that uses the shared disk, and using alerts to monitor for messages matching specific conditions. There is no problem during normal operation, but when a failover occurs in the cluster, the universal forwarder on the standby server reads the logs on the shared disk from the beginning, resulting in duplicate logs in Splunk Cloud. In a shared-disk cluster configuration like this, has anyone taken measures to prevent logs from being duplicated in Splunk Cloud, such as preventing the monitored logs from being forwarded from the beginning after a failover? *Translated by the Splunk Community Team*
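One workaround that gets discussed for this situation, offered only as a sketch and not an official recommendation: the universal forwarder tracks its read positions in the fishbucket directory, so replicating that directory from the active node to the standby node can let the standby resume roughly where the active node left off instead of starting from the beginning. Paths, hostnames, and the scheduling below are placeholders; test carefully, since copying the fishbucket while the UF is writing to it may capture an inconsistent state.

```
# Hypothetical periodic sync of the UF's read-position database (fishbucket)
# from the active cluster node to the standby node.
rsync -a --delete /opt/splunkforwarder/var/lib/splunk/fishbucket/ \
      standby-node:/opt/splunkforwarder/var/lib/splunk/fishbucket/

# After a failover, start the UF on the standby node only once the last sync is in place.
```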
I want to add a text box to a dashboard, e.g. --> search | table field1 field2 field3. Here I want to put a text box for field3 on the panel itself, so that the panel can be filtered by the field3 value.
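A minimal Simple XML sketch of that idea, assuming a text input whose token is applied to field3 in the panel's search (the index name and default value are placeholders):

```
<fieldset submitButton="false">
  <input type="text" token="field3_filter" searchWhenChanged="true">
    <label>Filter field3</label>
    <default>*</default>
  </input>
</fieldset>
<row>
  <panel>
    <table>
      <search>
        <query>index=my_index | search field3="$field3_filter$" | table field1 field2 field3</query>
      </search>
    </table>
  </panel>
</row>
```

The default of * keeps the panel populated before anything is typed.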
Hello, I have a table with 7 columns, some of them calculated from a lookup. I want to count the total of one of the columns and then calculate the percentage of another column based on that total. I tried this but I'm getting 0 results:

| stats count by SERVERS
| stats count(SERVERS) by Domain as "Domain_Count"
| eventstats sum(count) as Total_Servers

What can I do? Thanks
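For what it's worth, chaining two stats commands like that drops the fields the second one needs, which is one way to end up with no results. A hedged sketch of the pattern, assuming each event carries both SERVERS and Domain and you want each domain's server count as a percentage of the overall total:

```
| stats dc(SERVERS) as Domain_Count by Domain
| eventstats sum(Domain_Count) as Total_Servers
| eval Percentage=round(Domain_Count / Total_Servers * 100, 2)
```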
Hi, I need to be able to see what inputs people are entering into dashboard panels (i.e. what filters they are applying or searching for). So far I have used the _audit and _internal indexes to be able to see which users have accessed which saved-searches on each dashboard, but have not been able to identify the input that they have entered. I am hoping to create a dashboard table for this. Thanks.
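One angle that sometimes works, sketched under the assumption that the dashboard panels run inline searches, so the values users type end up substituted into the search string that _audit records (the final filter and field name are placeholders to narrow it down to a particular dashboard's searches):

```
index=_audit action=search info=granted NOT user=splunk-system-user
| table _time user search
| search search="*your_input_field=*"
```

If the panels only ever invoke saved searches by name, the substituted values may not appear this way, which would match what you are seeing.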
I am working on a use case to detect an account being created and deleted within a short period of time. Could you please give a simple example of how connected=true/false affects the results of the transaction command? I already referred to a previous answer but didn't understand the explanation. Additionally, please also explain the effect of connected=true/false in the query below, and what the best practice is.

sourcetype=wineventlog (EventCode=4726 OR EventCode=4720)
| transaction user maxspan=240m startswith="EventCode=4720" endswith="EventCode=4726" connected=false
| table Time, dest, EventCode, user, src_user, Account_Domain

@Ledion_Bitincka @richgalloway
How do I rename/conjoin/remove the space between the field "ThreeDSecureResult" and "description"? The value is coming up as 'description' rather than 'Failed' when I try to table it in Splunk. <ThreeDSecureResult description="Failed"/>
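If the element appears literally in the raw event, one sketch is to extract the attribute value into a single, explicitly named field with rex (the field name below is just a suggestion):

```
| rex field=_raw "<ThreeDSecureResult description=\"(?<ThreeDSecureResult_description>[^\"]+)\""
| table ThreeDSecureResult_description
```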
I would like to run a scan on the backend and look for "*M5*-CLDB" or any combination of M5 and CLDB. We have a distributed Splunk environment with indexer and search head clusters. Saved searches, lookups, and dashboards need to be modified due to the cluster name change. Could someone share your thoughts on this?
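One way to start the scan from the search tier rather than the OS, sketched for saved searches only (dashboards and lookups have their own REST endpoints, e.g. data/ui/views, and would need similar searches; permission to query the endpoints is assumed):

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| regex search="M5.*CLDB|CLDB.*M5"
| table title eai:acl.app eai:acl.owner search
```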
How do I make the Domain Entity input a multiselect with value 1: material, value 2: sm, value 3: all? My inputToken and outputToken are different for each Domain Entity selection. How can I pass two tokens to two different search queries when a multiselect is used?

<row>
  <panel>
    <input type="dropdown" token="tokEnvironment" searchWhenChanged="true">
      <label>Domain</label>
      <choice value="goodsdevelopment">goodsdevelopment</choice>
      <choice value="materialdomain">materialdomain</choice>
      <choice value="costsummary">costsummary</choice>
      <change>
        <unset token="tokSystem"></unset>
        <unset token="form.tokSystem"></unset>
      </change>
      <default></default>
    </input>
    <input type="dropdown" token="tokSystem" searchWhenChanged="true">
      <label>Domain Entity</label>
      <fieldForLabel>$tokEnvironment$</fieldForLabel>
      <fieldForValue>$tokEnvironment$</fieldForValue>
      <search>
        <query>| makeresults | eval goodsdevelopment="airbag",materialdomain="material,sm",costsummary="costing"</query>
      </search>
      <change>
        <condition match="$label$==&quot;airbag&quot;">
          <set token="inputToken">airbagSizeScheduling</set>
          <set token="outputToken">goodsdevelopment</set>
        </condition>
        <condition match="$label$==&quot;costing&quot;">
          <set token="inputToken">costSummary</set>
          <set token="outputToken">costing</set>
        </condition>
        <condition match="$label$==&quot;material&quot;">
          <set token="inputToken">material</set>
          <set token="outputToken">md</set>
        </condition>
      </change>
    </input>
Hi Splunkers, I need to send 50 reports out of a Splunk query to my team members every month. Currently I'm producing them manually and distributing them. We are planning to automate this process by sending them to Jira, opening tickets with the necessary data, and assigning them to people. What should I do in order to achieve this? Is there a proper Splunk Jira add-on on Splunkbase? Any recommendations would be helpful. TIA
As the title says, when troubleshooting data ingestion on search heads, e.g. running an SPL search to check whether certain data is landing in an index, or searching through an add-on like AWS to check its diagnostic logs on the HF: for tests like these, which search head is best to use, the monitoring console, a regular search head, or an SHC?
Hey, I have the following query:

```
| makeresults
| eval prediction_str_body="[{'stringOutput':'Alpha','doubleOutput':0.52},{'stringOutput':'Beta','doubleOutput':0.48}]"
```

But no matter what I do, I can't seem to extract each element of the list and turn it into its own event. Ideally I'd like a table afterwards of the sum of each value, across all rows:

Alpha: 0.52
Beta: 0.48

Thanks!
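A sketch that works on this sample, assuming the single quotes can safely be swapped for double quotes so the string parses as JSON:

```
| makeresults
| eval prediction_str_body="[{'stringOutput':'Alpha','doubleOutput':0.52},{'stringOutput':'Beta','doubleOutput':0.48}]"
| eval prediction_str_body=replace(prediction_str_body, "'", "\"")
| spath input=prediction_str_body path={} output=item
| mvexpand item
| spath input=item
| stats sum(doubleOutput) as total by stringOutput
```

The spath/mvexpand pair turns each array element into its own row, and the final stats sums the doubleOutput values per stringOutput across all rows.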
Hello, how can I pre-calculate and search historical data from a correlation between an index and a CSV/DB lookup? For example: from vulnerability_index, there are 100k IP addresses scanned in 24 hours. When performing a lookup against the CSV file from this index, only 2 IPs match, but every time a search is run in the dashboard, it compares the 100k IPs with those 2 IPs. How do we pre-calculate the search and store the data, so that every time a search is performed on a dashboard it only searches the historical data and does not have to compare 100k IPs with the 2 IPs? Thank you in advance for your help.

index=vulnerability_index | table ip_address, vulnerability, score

ip_address     vulnerability          score
192.168.1.1    SQL Injection          9
192.168.1.1    OpenSSL                7
192.168.1.2    Cross Site-Scripting   8
192.168.1.2    DNS                    5
x.x.x.x        ...                    (total IPs: 100k)

company.csv

ip_address     company   location
192.168.1.1    Comp-A    Loc-A
192.168.1.2    Comp-B    Loc-B

| lookup company.csv ip_address as ip_address OUTPUTNEW ip_address, company, location

ip_address     vulnerability          score   company   location
192.168.1.1    SQL Injection          9       Comp-A    Loc-A
192.168.1.1    OpenSSL                7       Comp-A    Loc-A
192.168.1.2    Cross Site-Scripting   8       Comp-B    Loc-B
192.168.1.2    DNS                    5       Comp-B    Loc-B
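A hedged sketch of the pre-calculation idea: run the expensive correlation once on a schedule and write only the matching rows to a result lookup (or a summary index via collect), so the dashboard reads the small pre-joined file instead of comparing 100k IPs on every load. The lookup name matched_vulns.csv and the schedule are placeholders:

```
index=vulnerability_index earliest=-24h
| lookup company.csv ip_address OUTPUT company location
| where isnotnull(company)
| table ip_address vulnerability score company location
| outputlookup matched_vulns.csv
```

The dashboard panel then becomes just | inputlookup matched_vulns.csv, optionally with further filtering.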
Hello, recently I've added a new firewall as a source to the Splunk deployment at work, but I can't figure out why my LINE_BREAKER configuration is not working. I've deployed it both on the heavy forwarder and the indexers but still can't make it work. Logs are coming in like this:

Sep 19 16:02:28 host_ip date=2023-09-19 time=16:02:27 devname="fw_name_1" devid="fortigate_id_1" eventtime=1695157347491321753 tz="-0500" logid="0001000014" type="traffic" subtype="local" level="notice" vd="vdom1" srcip=xx.xx.xx.xx srcport=3465 srcintf="wan_1" srcintfrole="undefined" dstip=xx.xx.xx.xx dstport=443 dstintf="client" dstintfrole="undefined" srccountry="Netherlands" dstcountry="Peru" sessionid=1290227282 proto=6 action="close" policyid=0 policytype="local-in-policy" service="HTTPS" trandisp="noop" app="HTTPS" duration=9 sentbyte=1277 rcvdbyte=8294 sentpkt=11 rcvdpkt=12 appcat="unscanned"
Sep 19 16:02:28 host_ip date=2023-09-19 time=16:02:28 devname="fw_name_1" devid="fortigate_id_1" eventtime=1695157347381319603 tz="-0500" logid="0000000013" type="traffic" subtype="forward" level="notice" vd="vdom2" srcip=143.137.146.130 srcport=33550 srcintf="wan_2" srcintfrole="undefined" dstip=xx.xx.xx.xx dstport=443 dstintf="3050" dstintfrole="lan" srccountry="Peru" dstcountry="United States" sessionid=1290232934 proto=6 action="close" policyid=24 policytype="policy" poluuid="12c55036-3d5b-51ee-9360-c36a034ab600" policyname="INTERNET_VDOM" service="HTTPS" trandisp="noop" duration=2 sentbyte=2370 rcvdbyte=5826 sentpkt=12 rcvdpkt=11 appcat="unscanned"
Sep 19 16:02:28 host_ip date=2023-09-19 time=16:02:28 devname="fw_name_1" devid="fortigate_id_1" eventtime=1695157347443046437 tz="-0500" logid="0000000020" type="traffic" subtype="forward" level="notice" vd="vdom2" srcip=xx.xx.xx.xx srcport=52777 srcintf="wan_2" srcintfrole="undefined" dstip=xx.xx.xx.xx dstport=443 dstintf="3050" dstintfrole="lan" srccountry="Peru" dstcountry="Peru" sessionid=1289825875 proto=6 action="accept" policyid=24 policytype="policy" poluuid="12c55036-3d5b-51ee-9360-c36a034ab600" policyname="INTERNET_VDOM" service="HTTPS" trandisp="noop" duration=500 sentbyte=1517 rcvdbyte=1172 sentpkt=8 rcvdpkt=7 appcat="unscanned" sentdelta=1517 rcvddelta=1172
Sep 19 16:02:28 host_ip date=2023-09-19 time=16:02:28 devname="fw_name_1" devid="fortigate_id_1" eventtime=1695157347481317830 tz="-0500" logid="0000000013" type="traffic" subtype="forward" level="notice" vd="vdom2" srcip=xx.xx.xx.xx srcport=18191 srcintf="3050" srcintfrole="lan" dstip=xx.xx.xx.xx dstport=443 dstintf="wan_2" dstintfrole="undefined" srccountry="Peru" dstcountry="Peru" sessionid=1290224387 proto=6 action="timeout" policyid=21 policytype="policy" poluuid="ab285ae0-3d5a-51ee-dce1-3f4aec1e32dc" policyname="PUBLICACION_VDOM" service="HTTPS" trandisp="noop" duration=13 sentbyte=180 rcvdbyte=0 sentpkt=3 rcvdpkt=0 appcat="unscanned"
Sep 19 16:02:28 host_ip date=2023-09-19 time=16:02:27 devname="fw_name_2" devid="fortigate_id_2" eventtime=1695157346792901761 tz="-0500" logid="0000000013" type="traffic" subtype="forward" level="notice" vd="vdom3" srcip=xx.xx.xx.xx srcport=47767 srcintf="3006" srcintfrole="lan" dstip=xx.xx.xx.xx dstport=8580 dstintf="wan_2" dstintfrole="undefined" srccountry="United States" dstcountry="Peru" sessionid=3499129086 proto=6 action="timeout" policyid=18 policytype="policy" poluuid="9cba23b2-3dfa-51ee-847f-49862ff000c0" policyname="PUBLICACION_VDOM" service="tcp/8580" trandisp="noop" duration=10 sentbyte=40 rcvdbyte=0 sentpkt=1 rcvdpkt=0 appcat="unscanned"
srchwvendor="Cisco" devtype="Router" mastersrcmac="xxxxxxxxxxxxxxx" srcmac="xxxxxxxxxxxxxxx" srcserver=0

And the configuration I added to props.conf is the following:

[host::host_ip]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\w{3}\s+\d{1,2}\s\d{2}\:\d{2}\:\d{2})
TIME_PREFIX = eventtime=
TIME_FORMAT = %b %d %H:%M:%S

The format is similar to the configuration applied to similar sources, so I can't figure out why it isn't working. I'd appreciate any kind of insight you could bring. Thanks in advance!
I am trying to add a new input in inputs.conf for a network shared folder, to forward some logs from a device that has no log-forwarding option. When viewing splunkd.log I see that the username and password are wrong:

09-19-2023 21:59:46.953 +0300 WARN FilesystemChangeWatcher [10812 MainTailingThread] - error getting attributes of path "\\192.168.1.142\df\InvalidPasswordAttempts.log": The user name or password is incorrect.

However, I can access the shared folder with a browser on the UF host with no problems. By the way, I am using my Microsoft account to log in to the Windows 11 machine where the UF resides. Any suggestions? Thanks.
Python 2.7, the last release of Python 2, reached End of Life on January 1, 2020. As part of our larger effort to stay current on the latest libraries and packages, Splunk announced our Python 3 migration strategy in October 2018 and again in July 2019. Splunk has released versions of Splunk Cloud Platform providing a Python 3 runtime since the release of Splunk Enterprise 8.0 in October 2019. As the Splunk Cloud administrator, you can use the Splunk Platform Upgrade Readiness App to check compatibility with Python 3. If your Splunk Cloud Platform deployment contains outdated Python code, you need to upgrade it to be compatible with Python 3 as soon as possible.

You might be wondering: what is the easiest way to migrate my deployment and Splunk apps to Python 3.7 while minimizing interruptions? We highly recommend that you use the Splunk Platform Upgrade Readiness App (URA), which scans your deployed apps for any components that might be impacted by the migration to Python 3 and lists steps you can take to prepare. If the URA determines that an app on your stack might not be compatible with Python 3.7, take the following actions:

If a new app version compatible with Python 3.7 is available:
- App comes from Splunkbase: update the app to the latest version compatible with Python 3.7.
- App does NOT come from Splunkbase: take responsibility for updating the app as a private app.

If no new app version is available, or the new version is not compatible with Python 3.7, then whether or not the app comes from Splunkbase, either:
1. Take responsibility for updating the app as a private app, and accept that the app may no longer function after Splunk performs the platform update; or
2. Uninstall or disable the app.

For developers, Splunk's AppInspect API can help detect issues that would prevent your app from being compatible with Splunk Cloud. In addition, we encourage you to use the latest Splunk SDK for Python, or at least v1.6.6, which is cross-compatible with Python 2 and Python 3.7. For migration preparation for Splunk Enterprise, see Python Development in Splunk Enterprise for more details.

If there are ML models existing within your Splunk solutions, you must update them to support Python 3 as well. For more information, see Splunk IT Service Intelligence and Splunk Machine Learning Toolkit.

If you have any questions, reach out to us at python27-eol@splunk.com.

Best,
Splunk Python Migration Team
The following works fine in the Search app:   ... | makemv delim=";" hashes | ...   The equivalent curl call   curl ... -d search="search ... | makemv delim=\";\" hashes | ..." -d output_mode=csv   fails with an "Unbalanced quotes" error.  Delimiters other than ; work fine. I tried to escape the semicolon, use Unicode values, replace the string with a variable, all to no avail. Any suggestions?
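A hedged guess at the cause: with plain -d the payload is sent as-is, and an unencoded ; or embedded quote in the search string can be mangled before Splunk's parser sees it, whereas letting curl URL-encode the parameter often avoids the "Unbalanced quotes" error. A sketch with placeholder host, token, and endpoint:

```
curl -k -H "Authorization: Splunk <token>" \
     https://mysearchhead:8089/services/search/jobs/export \
     --data-urlencode 'search=search index=myindex | makemv delim=";" hashes | table hashes' \
     -d output_mode=csv
```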
I need to break out log data from two separate multi-value fields into single-value fields. Here is what the data looks like: each line of data in "participants{}.object_value" corresponds to the line at the same position in "participants{}.role", and I would like named victim and offender fields. I don't understand how to use the mv commands to expand the data from the two different fields and then combine them into new fields.
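A sketch of the usual mvzip/mvexpand pattern, under the assumption that the two multi-value fields line up positionally and the role values literally contain "victim" and "offender" (adjust the match patterns to your data):

```
| eval pair=mvzip('participants{}.role', 'participants{}.object_value', "|")
| mvexpand pair
| eval role=mvindex(split(pair, "|"), 0), object_value=mvindex(split(pair, "|"), 1)
| eval victim=if(match(role, "(?i)victim"), object_value, null())
| eval offender=if(match(role, "(?i)offender"), object_value, null())
| table _time role object_value victim offender
```

mvzip pairs the values by position, mvexpand gives each pair its own row, and the split/mvindex step breaks each pair back into single-value fields.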
Hi All, I have read similar posts but none that will get me to an answer. My log entry is this:

2023-09-19 16:17:01,306 <OnAirSchedule Service="9008" Status="ON" StartDateTime="59025.5249306"/>

The StartDateTime is in MJD and I would like to get it into a human-readable format. Below is my search: some regex to start with, then the conversion.

| rex "<OnAirSchedule\sService=\"(?<SERVICE>[0-9]+)\"\sStatus\=\"(?<STATUS>.+)\"\sStartDateTime\=\"(?<START_DATE>.+)\"\/\>"
| eval jdate=START_DATE,epoch_date=strptime(jdate,"%y%j"),date=strftime(epoch_date,"%Y-%m-%d %H:%M:%S.%1N")
| table _time SI_SERVICE_KEY STATUS START_DATE epoch_date date

This was a solution in another question; however, I get the date/time 2059-01-25 00:00:00.0. I have tried variations of the %y%j, such as %y.%j and %Y.%j, however these just seem to treat the value as a Julian date, ignoring the part after the decimal point. This page seems to point to something close to what I am after, but it doesn't deal with the full MJD: https://community.splunk.com/t5/Getting-Data-In/Splunk-recognizing-Julian-Date-and-Elapsed-Seconds/m-p/72709 Any advice greatly welcomed.
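Since MJD is a plain day count (day 0 = 1858-11-17) and MJD 40587 corresponds to 1970-01-01, the Unix epoch, the fractional value can be converted arithmetically instead of through strptime, which only understands calendar formats. A sketch using your extracted field:

```
| rex "<OnAirSchedule\sService=\"(?<SERVICE>[0-9]+)\"\sStatus=\"(?<STATUS>[^\"]+)\"\sStartDateTime=\"(?<START_DATE>[^\"]+)\""
| eval epoch_date=(START_DATE - 40587) * 86400
| eval date=strftime(epoch_date, "%Y-%m-%d %H:%M:%S")
| table _time SERVICE STATUS START_DATE epoch_date date
```

The multiplication by 86400 turns the fractional day into seconds, so the time after the decimal point is preserved.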
Hello All, I need to send a request to the Splunk API from a Linux server, but curl is complaining because the search argument is too long (it could be up to 500000 chars). My question is: how can we use @myFile.spl to query the Splunk API? This is what I have done so far, but no luck yet.

curl --noproxy '*' -k -H "Authorization: Splunk myToken" https://mySearchHead:8089/servicesNS/admin/search/search/jobs/export?output_mode=json -d search=`echo $myVar`
error: Argument list too long

curl --noproxy '*' -k -H "Authorization: Splunk myToken" https://mySearchHead:8089/servicesNS/admin/search/search/jobs/export?output_mode=json -d @query2.spl
(Format 1 in query2.spl file --> "search= | search index=myIndex ...." up to 500000 chars)
error: {"messages":[{"type":"FATAL","text":"Empty search."}]}

curl --noproxy '*' -k -H "Authorization: Splunk myToken" https://mySearchHead:8089/servicesNS/admin/search/search/jobs/export?output_mode=json -d @query2.spl
(Format 2 in query2.spl file --> search= "| search index=myIndex ...." up to 500000 chars -- difference with 3 is quotes position)
error: {"messages":[{"type":"ERROR","text":"Error in 'SearchParser': Missing a search command before '\"'. Error at position '0' of search query '\"| search index...."

curl --noproxy '*' -k -H "Authorization: Splunk myToken" https://mySearchHead:8089/servicesNS/admin/search/search/jobs/export?output_mode=json -d search=@query2.spl
(Format 2 in query2.spl file --> "| search index=myIndex ...." up to 500000 chars -- difference with 3 is quotes position)
error: {"messages":[{"type":"ERROR","text":"Error in 'SearchParser': Missing a search command before '@'. Error at position '0' of search query '@query2.spl'.","help":""}]}
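A hedged sketch of the @file variant: curl's --data-urlencode supports a name@filename form that reads the whole file, URL-encodes it, and sends it as the named parameter, which sidesteps both the shell's argument-length limit and the quoting problems. The file should contain only the SPL itself (starting with search or a leading |), not a search= prefix:

```
# query2.spl contains just the SPL, e.g.:  search index=myIndex ...
curl --noproxy '*' -k -H "Authorization: Splunk myToken" \
     "https://mySearchHead:8089/servicesNS/admin/search/search/jobs/export?output_mode=json" \
     --data-urlencode search@query2.spl
```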