All Topics



Hi, what is the difference between extracting fields in a query with rex and extracting them in a config file? What are the pros and cons, and how does performance compare?   Thanks,
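For context, the two approaches being compared look roughly like this (the sourcetype and field names here are made up for illustration):

```
# Inline, per-search extraction with rex:
index=web sourcetype=access_custom
| rex field=_raw "user=(?<user>\w+)"

# The same extraction defined once in props.conf, applied at search
# time to every search against that sourcetype:
[access_custom]
EXTRACT-user = user=(?<user>\w+)
```

Both are search-time extractions, so the per-event regex cost is similar; the config-file form centralizes the definition so every search and user gets it automatically.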
I want to get the result from the next line of the log when I encounter a keyword. Example log:

----error in checking status--------
----Person Name: abcd, Status=active---------
-----Check for Status------
------success : true--------
-----Start Processing XXX----------
----Person Name: abcd, Status=active---------
-----Check for Status------
------success : true--------
-----Start Processing XXX----------
----Person Name: abcd, address:yzgj---------
-----Check for Person------
------success : true--------
-----Start Processing XXX----------

In the above log I want to capture the person name associated with "Check for Person". The log is indexed by _time. I want to display the following result:

_time    Process    Person Name
         XXX        abcd

I don't want to use map or transaction, as those are expensive and there are a lot of events. Thank you for the help.
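A transaction-free sketch of one way to phrase this, assuming each dashed line arrives as its own event (the index name is made up): streamstats carries the most recently seen person_name forward onto the "Check for Person" events.

```
index=app_logs ("Person Name:" OR "Check for Person")
| rex "Person Name:\s*(?<person_name>[^,-]+)"
| streamstats last(person_name) as person_name
| search "Check for Person"
| table _time person_name
```

stats-family functions ignore null values, so `last(person_name)` keeps the value from the preceding "Person Name" event until a new one appears.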
Hi communities, I am doing a calculation with an eval command:

| eval dormancy=if(last_login="(never)",round((now()-strptime(created,"%Y-%m-%d"))/86400),round((now()-strptime(last_login,"%Y-%m-%d"))/86400))

The above calculates the dormancy number correctly, but as soon as I change the code to the following:

| eval dormancy=if(last_login="(never)",round((now()-strptime(created,"%Y/%m/%d"))/86400),round((now()-strptime(last_login,"%Y/%m/%d"))/86400))

i.e. from "-" to "/", strptime no longer calculates the dormancy days. Is this a limit of strptime, or am I doing something wrong?
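strptime returns null when the format string does not match the literal text of the field, so "%Y/%m/%d" only works if the stored dates actually contain slashes; it is not a strptime limitation. A variant that leans on that behavior, handling the "(never)" case via coalesce (field names from the question; this assumes the stored dates really are dash-separated):

```
| eval dormancy=round((now() - coalesce(strptime(last_login, "%Y-%m-%d"),
                                        strptime(created,  "%Y-%m-%d"))) / 86400)
```

Here strptime("(never)", "%Y-%m-%d") yields null, so coalesce falls back to the created date.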
I'm migrating my Splunk instance from an outdated OS. I want to increase the buffer size for my Splunk forwarder so that it can hold all the logs when the receiver/indexer is down. We are using Splunk version 6.6.0, and I'm unable to find relevant documentation describing the configuration file changes.
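For reference, the in-memory output queue on a forwarder is controlled in outputs.conf; a sketch is below (the value and server name are illustrative, and attribute support should be verified against the outputs.conf spec for the 6.6 release):

```
# outputs.conf on the forwarder
[tcpout]
maxQueueSize = 512MB

[tcpout:primary_indexers]
server = indexer1.example.com:9997
```

A larger maxQueueSize lets the forwarder buffer more data in memory while the indexer is unreachable; monitored files are also re-read from their last position once the connection returns, so the queue mainly matters for non-replayable inputs.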
I'm aiming to develop a playbook in Splunk SOAR (Phantom) to automate the deletion of containers (selected by label) older than one week. Can you guide me on which app to utilize for container management and how to implement appropriate filters in the Action Block?
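As an alternative to an app action block, this kind of cleanup is sometimes done against SOAR's own REST API from a custom code block. The sketch below is an assumption-heavy illustration, not a tested recipe: the host and token are placeholders, and the Django-style `_filter_*` query parameters are assumed from SOAR REST API conventions and should be checked against the docs for your release.

```python
import json
import ssl
import urllib.parse
import urllib.request
from datetime import datetime, timedelta, timezone

BASE = "https://soar.example.com"   # hypothetical SOAR host
TOKEN = "<automation-user-token>"   # placeholder; use a real automation token

def build_container_query(label, older_than_days=7):
    """Build /rest/container query params: label match + create_time cutoff."""
    cutoff = (datetime.now(timezone.utc)
              - timedelta(days=older_than_days)).strftime("%Y-%m-%dT%H:%M:%SZ")
    return urllib.parse.urlencode({
        "_filter_label": json.dumps(label),
        "_filter_create_time__lt": json.dumps(cutoff),
        "page_size": "0",   # ask for all matches in one page
    })

def delete_old_containers(label):
    """List matching containers, then DELETE each one by id."""
    ctx = ssl._create_unverified_context()  # lab only; verify certs in production
    headers = {"ph-auth-token": TOKEN}
    url = "{}/rest/container?{}".format(BASE, build_container_query(label))
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req, context=ctx) as resp:
        containers = json.load(resp).get("data", [])
    for c in containers:
        del_req = urllib.request.Request(
            "{}/rest/container/{}".format(BASE, c["id"]),
            headers=headers, method="DELETE")
        urllib.request.urlopen(del_req, context=ctx)
```

Within a playbook, the equivalent would typically be a filter on container create_time feeding whichever container-management action your installed apps expose.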
Hello, I have two searches that return message ids given certain field values. The first search:

index=messages* MSG_src="AAAAA" MSG_DOMAIN="BBBBBB" MSG_TYPE="CC *" | rename MSGID AS MSGID1

The second search:

index=messages* MSG_src="CCCCCC" MSG_DOMAIN="DDDDDDD" MSG_TYPE="Workflow Start" | rex field=_raw "<pmt>(?<pmt>.*)</pmt>" | rex field=_raw "<EventId>(?<MSGID1>.*)</EventId>" | search pmt=EEEEEEE

The results from the second search can come in up to an hour after the results from the first search. That is not an issue unless it takes over an hour. How can I account for this time delay so I can accurately alert when the span is longer than an hour? Thanks for the help!
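One common pattern for this is to run both event types in a single search over a window longer than the expected delay, and compute the per-id span with stats rather than a join (field values copied from the question; the threshold is in seconds):

```
index=messages* ((MSG_src="AAAAA" MSG_DOMAIN="BBBBBB" MSG_TYPE="CC *")
             OR (MSG_src="CCCCCC" MSG_DOMAIN="DDDDDDD" MSG_TYPE="Workflow Start"))
| rex field=_raw "<EventId>(?<MSGID1>.*)</EventId>"
| eval MSGID1=coalesce(MSGID1, MSGID)
| stats earliest(_time) as first_seen latest(_time) as last_seen by MSGID1
| eval span_sec=last_seen-first_seen
| where span_sec > 3600
```

Scheduling this over, say, the last two hours means a second-search event arriving up to an hour late is still paired with its first-search counterpart before the alert condition is evaluated.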
In part of a Splunk SOAR (Phantom) playbook I would like, in some cases, to send a syslog message to a remote syslog server. I did not find any well-known app that can help me, so I figured I would create it as Python code via the "Python Playbook Editor". But somehow, using the default socket library with the connect and send functions did not work: capturing on all network interfaces did not show any attempt to create the TCP flow to the destination. Could someone help me, or show me how I can open a TCP connection in Splunk SOAR?
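For comparison, a minimal standalone sketch of sending syslog over TCP with the standard socket library is below (hostname, app name, and server are placeholders). If the same calls produce no traffic inside SOAR, the usual suspects are outbound restrictions or proxying on the SOAR host/container rather than the code itself.

```python
import socket

def format_syslog(message, hostname="soar01", app="phantom",
                  facility=1, severity=5):
    # RFC 3164-style PRI = facility * 8 + severity; 1*8+5 = 13 (user.notice).
    pri = facility * 8 + severity
    return "<{}>{} {}: {}".format(pri, hostname, app, message)

def send_syslog_tcp(message, server, port=514, timeout=5):
    # Open a TCP connection, send one newline-terminated message, close.
    payload = (format_syslog(message) + "\n").encode("utf-8")
    with socket.create_connection((server, port), timeout=timeout) as sock:
        sock.sendall(payload)
```

create_connection raises on failure, so wrapping the call in try/except inside the playbook will at least surface whether the connect is being refused, timing out, or never attempted.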
Is it possible to do this in Splunk, and what type of logs do I need to have in Splunk?
Hi, I recently changed a SQL query in Splunk DB Connect for one of the dashboards. The query ran, but I don't see the dashboard reflecting the new data. On checking, I see the index did not refresh after the new query was implemented; the last event in the index remains from the day I changed the query. The new query has two new columns, but I don't see them reflected either. Can anyone please help me with this? It's a bit urgent!
Hello, what are the best methods to ingest Datadog log and metrics data into Splunk Cloud or a heavy forwarder? We have a requirement to fetch Datadog dashboard data and populate a Splunk dashboard with it. Thank you. Regards, Madhav
Hi, so I have the base query below:

| inputlookup abc.csv where DECOMMISSIONED=N
| fields DATABASE DB_VERSION APP_NAME ACTIVE_DC HOST_NAME DB_ROLE COMPLIANCE_FLAG PII PCI SOX
| rename DATABASE as Database
| join type=left Database
    [| metadata type=hosts index=data
     | fields host, lastTime, totalCount
     | eval Database=upper(host)
     | search totalCount>1
     | stats max(lastTime) as lastTime, last(totalCount) as totalCount by Database
     | eval age=round((now()-lastTime)/3600,1)
     | eval Status=case( lastTime>(now()-(3600*2)),"Low", lastTime<(now()-(3600*2+1)) AND lastTime>(now()-(3600*8)),"Medium", lastTime<(now()-(3600*8+1)) AND lastTime>(now()-(3600*24)),"High", 1=1,"Critical")
     | convert ctime(lastTime) timeformat="%d-%m-%Y %H:%M:%S"
     | eval Reference="SPL"]
| rex mode=sed field=HOST_NAME "s/\..*$//g"
| fields Database Reference DB_VERSION APP_NAME ACTIVE_DC HOST_NAME Status DB_ROLE COMPLIANCE_FLAG
| fillnull value=Missing Status
| fillnull value=Null

Now I need to add a filter field, let's say Privacy, with PII, PCI, and SOX as its choices, but I don't want the raw values of those fields to appear as choices in the Privacy filter; the filter just needs to be reflected in the Summary tab:

<row>
  <panel>
    <table>
      <title>Summary</title>
      <search base="base">
        <query>| search APP_NAME="$application$" Database="$database$" HOST_NAME="$host$" DB_VERSION="$version$" Status="$status$" COMPLIANCE_FLAG="$compliance$" Privacy="$privacyFilter$" | eval StatusSort=case(Status="Missing","1",Status="Critical","2",Status="High","3",Status="Medium","4",Status="Low","5") | sort StatusSort | table APP_NAME Database HOST_NAME DB_VERSION ACTIVE_DC Status DB_ROLE COMPLIANCE_FLAG PII PCI SOX | rename APP_NAME as Application, DB_VERSION as Version, ACTIVE_DC as DC, HOST_NAME as HOST</query>
      </search>
      <option name="count">10</option>
      <option name="dataOverlayMode">none</option>
      <option name="drilldown">none</option>
      <option name="percentagesRow">false</option>
      <option name="refresh.display">progressbar</option>
      <option name="rowNumbers">true</option>
      <option name="totalsRow">false</option>
      <option name="wrap">true</option>
      <format type="number" field="FileSize">
        <option name="precision">0</option>
      </format>
      <format type="color" field="Status">
        <colorPalette type="map">{"Missing":#DC4E41,"Critical":#F1813F,"High":#F8BE34,"Medium":#62B3B2,"Low":#53A051}</colorPalette>
      </format>
    </table>
  </panel>
</row>
</form>

Can someone help with how I can get this working? I added this panel:

<!-- New Privacy Filter Panel -->
<input type="multiselect" token="privacyFilter" searchWhenChanged="true">
  <label>Privacy</label>
  <choice value="*">All</choice>
  <choice value="PII">PII</choice>
  <choice value="PCI">PCI</choice>
  <choice value="SOX">SOX</choice>
  <fieldForLabel>Privacy</fieldForLabel>
  <fieldForValue>Privacy</fieldForValue>
  <default>*</default>
  <initialValue>*</initialValue>
</input>
</fieldset>

and this:

<row>
  <panel>
    <table>
      <title>Summary</title>
      <search base="base">
        <query>| search APP_NAME="$application$" Database="$database$" HOST_NAME="$host$" DB_VERSION="$version$" Status="$status$" COMPLIANCE_FLAG="$compliance$" Privacy="$privacyFilter$" | eval StatusSort=case(Status="Missing","1",Status="Critical","2",Status="High","3",Status="Medium","4",Status="Low","5") | sort StatusSort | table APP_NAME Database HOST_NAME DB_VERSION ACTIVE_DC Status DB_ROLE COMPLIANCE_FLAG PII PCI SOX | rename APP_NAME as Application, DB_VERSION as Version, ACTIVE_DC as DC, HOST_NAME as HOST</query>
      </search>
      <option name="count">10</option>
      <option name="dataOverlayMode">none</option>
      <option name="drilldown">none</option>
      <option name="percentagesRow">false</option>
      <option name="refresh.display">progressbar</option>
      <option name="rowNumbers">true</option>
      <option name="totalsRow">false</option>
      <option name="wrap">true</option>
      <format type="number" field="FileSize">
        <option name="precision">0</option>
      </format>
      <format type="color" field="Status">
        <colorPalette type="map">{"Missing":#DC4E41,"Critical":#F1813F,"High":#F8BE34,"Medium":#62B3B2,"Low":#53A051}</colorPalette>
      </format>
    </table>
  </panel>
</row>
</form>

but I am getting "no result found".
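The "no result found" symptom is consistent with Privacy never existing as a field in the base search, so even `Privacy="*"` matches nothing. A hedged sketch of constructing it as a multivalue field before filtering (this assumes the PII/PCI/SOX lookup columns hold "Y" when the flag is set; adjust to the actual values):

```
| eval Privacy=mvappend(if(PII="Y","PII",null()),
                        if(PCI="Y","PCI",null()),
                        if(SOX="Y","SOX",null()))
| search Privacy IN ($privacyFilter$)
```

With a multiselect token, the input may also need `delimiter` and `valuePrefix`/`valueSuffix` options set so that multiple selections expand into a valid IN() list.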
We can see only 10 hosts in index=os sourcetype=cpu and index=os source=vmstat. We should be getting all the Unix/Linux hosts for that sourcetype and source. We use these to generate high-CPU-utilization and high-memory-utilization incidents. Until the end of August we could see 100+ hosts for the mentioned source and sourcetype, but after August we no longer see 100+ hosts; we see only around 10, 15, or 7. Please help me with this.
I have added this file for monitoring to ingest data, but the data is not getting ingested. The log file path is /tmp/mountcheck.txt.

[monitor:///tmp/mount.txt]
disabled = 0
index = Test_index
sourcetype = Test_sourcetype
initCrcLen = 1024
crcSalt = "unique_salt_value"
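One detail worth noting when reading the stanza above: the monitored path (/tmp/mount.txt) differs from the stated log path (/tmp/mountcheck.txt), and uppercase index names are generally not valid, so the index may also need checking. A stanza matching the stated path might look like this (index and sourcetype names here are illustrative):

```
[monitor:///tmp/mountcheck.txt]
disabled = 0
index = test_index
sourcetype = test_sourcetype
initCrcLen = 1024
```

If the target index does not exist, events for it are typically dropped with an error in splunkd.log, which is a useful place to confirm the diagnosis.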
I have the stanza below to ingest a JSON data file; it is deployed via the deployment server, and on the HF I added the props.conf entry. Initially I uploaded the file using the Splunk UI, but I am getting the events on one line.

inputs.conf:

[monitor:///var/log/Netapp_testobject.json]
disabled = false
index = Test_index
sourcetype = Test_sourcetype

props.conf:

[Test_sourcetype]
DATETIME_CONFIG=CURRENT
SHOULD_LINEMERGE=false
LINE_BREAKER=([{}\,\s]+)
NO_BINARY_CHECK=true
CHARSET=UTF-8
EVENT_BREAKER=([{}\,\s]+)
INDEXED_EXTRACTIONS=json
KV_MODE=json
TRUNCATE=0

The JSON data looks like this:

[
  {
    "Name": "test name",
    "Description": "",
    "DNSHostname": "test name",
    "OperatingSystem": "NetApp Release 9.1",
    "WhenCreated": "2/13/2018 08:24:22 AM",
    "distinguishedName": "CN=test name,OU=NAS,OU=AVZ Special Purpose,DC=corp,DC=amvescap,DC=net"
  },
  {
    "Name": "test name",
    "Description": "London DR smb FSX vserver",
    "DNSHostname": "test name",
    "OperatingSystem": "NetApp Release 9.13.0P4",
    "WhenCreated": "11/14/2023 08:43:36 AM",
    "distinguishedName": "CN=test name,OU=NAS,OU=AVZ Special Purpose,DC=corp,DC=amvescap,DC=net"
  }
]
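A hedged props.conf sketch for breaking a JSON array of objects into one event per object is below. The LINE_BREAKER capture group consumes the `},` separator between objects, and INDEXED_EXTRACTIONS is dropped here because combining it with KV_MODE=json commonly produces duplicated fields; verify both points against the props.conf spec for your version.

```
[Test_sourcetype]
DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = false
LINE_BREAKER = \}(\s*,\s*)\{
NO_BINARY_CHECK = true
CHARSET = UTF-8
KV_MODE = json
TRUNCATE = 0
```

Two caveats: the opening `[` and closing `]` of the array will still cling to the first and last events unless stripped (e.g. with a SEDCMD), and line-breaking props must live on the first full Splunk instance that parses the data (the HF here), not only on the forwarder.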
Hi, we have created an application using the Splunk Add-on Builder (https://apps.splunk.com/app/2962/) and created a Python script for the alert action. While validating the created add-on we are getting two errors; the rest of the test cases pass. Sharing the errors below.

First error:

{"validation_id": "v_1703053121_88", "ta_name": "TA-testaddon", "rule_name": "Validate app certification", "category": "app_cert_validation", "ext_data": {"is_visible": true}, "message_id": "7004", "description": "Check that no files have *nix write permissions for all users (xx2, xx6, xx7). Splunk recommends 644 for all app files outside of the bin/ directory, 644 for scripts within the bin/ directory that are invoked using an interpreter (e.g. python my_script.py or sh my_script.sh), and 755 for scripts within the bin/ directory that are invoked directly (e.g. ./my_script.sh or ./my_script). Since appinspect 1.6.1, check that no files have nt write permissions for all users.", "sub_category": "Source code and binaries standards", "solution": "There are multiple errors for this check. Please check \"messages\" for details.", "messages": "[{\"result\": \"warning\", \"message\": \"Suppressed 813 failure messages\", \"message_filename\": null, \"message_line\": null}, {\"result\": \"failure\", \"message\": \"A posix world-writable file was found. File: bin/ta_testaddon/aob_py3/splunktalib/splunk_cluster.py\", \"message_filename\": null, \"message_line\": null}]", "severity": "Fatal", "status": "Fail", "validation_time": 1703053540}

Second error:

{"validation_id": "v_1703053679_83", "ta_name": "TA-testaddon", "rule_name": "Validate app certification", "category": "app_cert_validation", "ext_data": {"is_visible": true}, "message_id": "7002", "description": "Check that the dashboards in your app have a valid version attribute.", "sub_category": "jQuery vulnerabilities", "solution": "Change the version attribute in the root node of your Simple XML dashboard default/data/ui/views/home.xml to `<version=1.1>`. Earlier dashboard versions introduce security vulnerabilities into your apps and are not permitted in Splunk Cloud File: default/data/ui/views/home.xml", "severity": "Fatal", "status": "Fail", "validation_time": 1703053994}

Kindly help in resolving this.
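Both errors point at their own fixes. The first is a file-permission problem: removing world-write access (e.g. setting the flagged files to 644, or 755 for directly executed scripts, with `chmod`) before repackaging should clear it. The second asks for a version attribute on the root node of the Simple XML dashboard; a sketch of the expected shape (the root element may be `<dashboard>` or `<form>` depending on the view):

```
<form version="1.1">
  <label>Home</label>
  ...
</form>
```

After both changes, re-running the Add-on Builder validation (or appinspect) on the rebuilt package confirms the checks pass.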
Hello, is there a way to see the full URL of a particular slow Transaction Snapshot? I believe that some of the slow search requests in our system could be caused by a specific user input that is part of the dynamic URL, but in the Transaction Snapshot dashboard (and in the Transaction Snapshot overview) I only see the aggregated short URL without the user input. Full URL example: https://host/Search/userInput

[Screenshots of the Transaction Snapshot dashboard and the individual transaction overview were attached here.]

Also, I don't think I have access to the Analytics dashboard.
Hi, I have two clustered indexers which are now constantly generating crash logs in /splunk/var/log/splunk every few minutes, and I am unable to figure out the cause from the crash log or the error in splunkd.log. Would anyone here be able to shed some light on this?

Splunkd error:

WARN SearchProcessRunner [19356 PreforkedSearchesManager-0] - preforked process=0/38 status=killed, signum=6, signame="Aborted", coredump=1, uptime_sec=37.282768, stime_sec=19.850199, max_rss_kb=472688, vm_minor=902282, vm_major=37, fs_r_count=608, fs_w_count=50856, sched_vol=3413, sched_invol=10923

Contents of one of the crash logs:

[build b6436b649711] 2023-11-02 11:39:40 Received fatal signal 6 (Aborted) on PID 23624.
Cause: Signal sent by PID 23624 running under UID 1001.
Crashing thread: BucketSummaryActorThread
Registers:
RIP: [0x00007F0D7E2DA387] gsignal + 55 (libc.so.6 + 0x36387)
RDI: [0x0000000000005C48] RSI: [0x00000000000059CC] RBP: [0x0000000000000BE7] RSP: [0x00007F0CF85F2268]
RAX: [0x0000000000000000] RBX: [0x0000562A9ADF7598] RCX: [0xFFFFFFFFFFFFFFFF] RDX: [0x0000000000000006]
R8: [0x00007F0CF85FF700] R9: [0x00007F0D7E2F12CD] R10: [0x0000000000000008] R11: [0x0000000000000206]
R12: [0x0000562A9AC0E070] R13: [0x0000562A9AF9CFB0] R14: [0x00007F0CF85F2420] R15: [0x00007F0CF806F260]
EFL: [0x0000000000000206] TRAPNO: [0x0000000000000000] ERR: [0x0000000000000000] CSGSFS: [0x0000000000000033] OLDMASK: [0x0000000000000000]

Regards, Zijian
I'm trying to make a box plot graph using <viz>. However, my code produces this error: "Error in 'stats' command: The number of wildcards between field specifier '*' and rename specifier 'lowerquartile' do not match. Note: empty field specifiers implies all fields, e.g. sum() == sum(*)". My code is this:

<viz type="viz_boxplot_app.boxplot">
  <search>
    <query>index=idx_prd_analysis sourcetype="type:prd_analysis:result" corp="AUS" | eval total_time = End_time - Start_time | stats median, min, max, p25 AS lowerquartile, p75 AS upperquartile by total_time | eval iqr=upperquartile-lowerquartile | eval lowerwhisker=median-(1.5*iqr) | eval upperwhisker=median+(1.5*iqr)</query>
    <earliest>$earliest$</earliest>
    <latest>$latest$</latest>
  </search>
  <option name="drilldown">all</option>
  <option name="refresh.display">progressbar</option>
</viz>

I don't use any eval or string words inside the stats command, but it happens anyway. How can I solve this problem?
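The error comes from the stats syntax: `p25 AS lowerquartile` has no field argument, so it implies `p25(*)`, and a wildcarded function cannot be renamed to a single field. A corrected sketch of the stats stage (also dropping `by total_time`, since grouping by the measured field would make every group a single value; whiskers here follow the common quartile ± 1.5 × IQR convention, which you may want to adjust):

```
index=idx_prd_analysis sourcetype="type:prd_analysis:result" corp="AUS"
| eval total_time = End_time - Start_time
| stats median(total_time) as median
        min(total_time) as min
        max(total_time) as max
        p25(total_time) as lowerquartile
        p75(total_time) as upperquartile
| eval iqr=upperquartile-lowerquartile
| eval lowerwhisker=lowerquartile-(1.5*iqr)
| eval upperwhisker=upperquartile+(1.5*iqr)
```

If the box plot should show one box per category, add a `by <category_field>` clause to the stats command instead of `by total_time`.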
I'm sorry if this is hard to read; I don't understand English well and am using a translation app. Currently, I am not able to make reports in Splunk run only after their prerequisite reports complete. Subsequent processing is started with a time margin after the expected completion time of the prerequisite process. However, with this method there is a risk that the subsequent processing will start before the prerequisite process has completed. We have a lot of reports to process and don't want to extend the schedule interval. Does anyone know a solution to this challenge?