All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi all, has anyone worked with the "Dismiss Azure Alert" workflow action in the Splunk Add-on for Microsoft Azure when it is run by Enterprise Security? I have to configure it but have never done so. Reading the documentation, it seems that everything is already configured and it should run without any problem. Does it require any special configuration, or anything that needs particular attention? Thank you for your time. Ciao. Giuseppe
Hi all, I need to create 2 dropdown fields that depend on each other, populated from several sources:

| makeresults
| eval name = "a"
| eval value = mvappend("1","2","3","4","5")
| union [| makeresults | eval name = "b" | eval value = mvappend("a","b","c","d","e") ]
| union [| makeresults | eval name = "c" | eval value = mvappend("qq","ss","ff","gg","rr") ]
| table name value
| stats values(*) as * by name value

When I choose A, I get only A's values; for B, only B's values, and so on. The queries come from several sources, so I can't simply append or union them; I need to create a token for each value. Please assist.

Name    Value
A       A values
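For the cascading behaviour, one hedged sketch is to drive the second dropdown's populating search from the token set by the first dropdown; the token name $name_tok$ below is hypothetical and would be whatever token the first input sets:

| makeresults
| eval name = "a", value = mvappend("1","2","3","4","5")
| union [| makeresults | eval name = "b" | eval value = mvappend("a","b","c","d","e") ]
| union [| makeresults | eval name = "c" | eval value = mvappend("qq","ss","ff","gg","rr") ]
| mvexpand value
| search name="$name_tok$"
| table name value

The first dropdown lists the distinct name values (a, b, c); whenever it changes, the populating search above re-runs and offers only the matching value entries for the second dropdown.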
Dear Community, let's say I was running a search for a one-hour period from 10:00 until 11:00, and we had a particular transaction that consisted of 2 or more events, the first occurring at 09:59 and the last at 10:01. Using the default transaction command, any events which occurred before 10:00 would not be included, and we would therefore not be viewing the whole transaction. Likewise, if a transaction started at 10:59 and didn't end until 11:01, any events which occurred after 11:00 would be dropped. Is there any way to include all events related to transactions which started or ended during the specified search time range? If this is not possible, it would instead be helpful to drop any transactions which did not both start and end within the time range - is there any way to achieve this? Kind regards, Ben
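There is no single switch on transaction for this, but a common workaround is sketched below: widen the search window beyond the period of interest, build the transactions, and then filter on the transaction boundaries. The index, the session_id key field, and the timestamps are hypothetical placeholders:

index=my_index session_id=* earliest="12/20/2022:09:45:00" latest="12/20/2022:11:15:00"
| transaction session_id
| eval txn_start=_time, txn_end=_time+duration
| eval win_start=strptime("12/20/2022 10:00:00","%m/%d/%Y %H:%M:%S"), win_end=strptime("12/20/2022 11:00:00","%m/%d/%Y %H:%M:%S")
| where txn_end>=win_start AND txn_start<=win_end

The final where keeps every transaction that overlaps 10:00-11:00, even if it started or ended outside it; swapping it for | where txn_start>=win_start AND txn_end<=win_end gives the opposite behaviour and drops anything that did not both start and end inside the window. For a transaction, _time is the time of its earliest event and duration is its span, which is why txn_end can be derived this way.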
Hi, I just installed UBA on RHEL 8.4 and started it for the first time, but it failed to start. I then ran stop-all and start-all to find which service is wrong. It looks like the HDFS datanode is not coming up, but I don't know how to fix it individually. Can anyone give me a hand?

Tue Dec 20 18:57:36 CST 2022: Running: /opt/caspida/bin/Caspida start-service hive-metastore
Hive tables are accessible
Looking for live HDFS datanodes
report: Incomplete HDFS URI, no host: hdfs://demo_uba:8020
(the line above is repeated many times)
No HDFS datanodes found, check if the required ports are open
Refer to installation instructions for the list of ports that are required to be open between the caspida cluster nodes
Hi, I have a few images in my dashboard. After a search I would like to attach a value (e.g. an order number) to each image and then use this value as input to a drilldown, so that when I click on an image another search starts with the order-number token. The images are SVG. Is this possible? Thank you
Hi All, I want to create multiple tables/panels inside a dashboard which will show static messages like DASHBOARD A, DASHBOARD B, DASHBOARD C, etc. These messages will drill down to the respective dashboards A, B and C. Currently I am using the query:

index=* | head 1 | eval DashboardName="Dashboard A" | table DashboardName

Is there a way to produce a panel with a static message without having to search a set of events by index, source or sourcetype? I don't want to use that unnecessarily.
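A generating command such as makeresults avoids touching any index at all; a minimal sketch for one panel (repeated per dashboard name) could be:

| makeresults
| eval DashboardName="Dashboard A"
| table DashboardName

Because makeresults creates its single result in memory, no index, source or sourcetype is searched, so there is none of the unnecessary load of running | head 1 against index=*.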
Hi, I need to connect from my local machine, using Python, to a Splunk Enterprise instance that is hosted within a VM. I tried with port 8000; the connection seemed to be established, but I cannot do anything further, like running queries. Thanks & Regards
Hello team, I am using syslog for log ingestion from Solaris servers. I can see results for tcpdump host solarisServer, but the logs are not visible on the search head.
Hi all, I use the following simple props.conf for some JSON-type events:

[my:sourcetype]
category = Structured
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
CHARSET = UTF-8
INDEXED_EXTRACTIONS = json
TIME_FORMAT = %s
disabled = false
pulldown_type = true
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = timestamp

The event looks like the following:

{"access_device": {"browser": "Edge Chromium", "browser_version": "108.0.1462.54", "epkey": null, "flash_version": "uninstalled", "hostname": null, "ip": "192.168.182.230", "is_encryption_enabled": "unknown", "is_firewall_enabled": "unknown", "is_password_set": "unknown", "java_version": "uninstalled", "location": {"city": "Bestine", "country": "Tatooine", "state": "Central and Western District"}, "os": "Windows", "os_version": "10"}, "adaptive_trust_assessments": {}, "alias": "unknown", "application": {"key": "ABCDEFG1234567", "name": "[UAT] Hello World App"}, "auth_device": {"ip": null, "key": null, "location": {"city": null, "country": null, "state": null}, "name": null}, "email": null, "event_type": "authentication", "factor": "not_available", "isotimestamp": "2022-12-20T09:14:08.755759+00:00", "ood_software": null, "reason": "allow_unenrolled_user", "result": "success", "timestamp": 1671527648, "txid": "c571233d-b357-3f07-e126-ca2623b8e0d9", "user": {"groups": [], "key": null, "name": "luke"}, "eventtype": "authentication", "host": "jedi1.mydomain.com"}

It works when I test it by uploading the log file and setting the sourcetype to my:sourcetype: fields and the timestamp are extracted. However, when events are fed from a UF, the timestamp can't be extracted and the file modification time is used as the timestamp instead. I tried adding 'TIME_PREFIX=timestamp": ' but it didn't help. Would anyone please help? Thanks and Regards
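One thing worth checking: with INDEXED_EXTRACTIONS, the structured parsing (including TIMESTAMP_FIELDS and TIME_FORMAT) happens on the universal forwarder itself, so the stanza generally has to be deployed to the UF as well, not only to the indexers; data that is already cooked by the UF is not re-parsed downstream, which would also explain why adding TIME_PREFIX on the indexer had no visible effect. A minimal sketch of the stanza as it might be placed on the forwarder (the app name in the path is an assumption):

# $SPLUNK_HOME/etc/apps/my_json_inputs/local/props.conf on the UF
# sketch only - app name and setting subset are assumptions
[my:sourcetype]
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %s
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)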
Hi, I have the following events in Splunk:

{
  "field1": "something",
  "execution_times": {
    "service1": 100,
    "service2": 400,
    (...)
    "service_N": 600,
  },
  "field2": "something"
}

How can I create a multiline chart that shows the p90 and p99 of each "service" in the JSON map "execution_times", based on the values (here 100, 400, (...), 600)? The query should produce a chart with N*2 different time series (lines), one p90 and one p99 line for every "service" that appears inside the events. Each event can contain different "services" in its execution_times map. Thanks
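Assuming the JSON fields are auto-extracted at search time (so each service appears as a field named execution_times.<name>; run | spath first if they are not), one hedged sketch is to let timechart fan out over a wildcard. The index, sourcetype and span below are placeholders:

index=my_index sourcetype=my_json
| timechart span=5m perc90(execution_times.*) perc99(execution_times.*)

Each field matching the wildcard produces its own series, so services appearing only in some events still get their own p90 and p99 lines; the series are named after the function and field, e.g. perc90(execution_times.service1).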
Good day all! I have created a lookup (| inputlookup Autosys.csv) with the fields KB, REGION and JOB_NAME. I have a Splunk search which returns some job data. How can I add the other lookup fields to the events, using JOB_NAME as the common field? Below is the search to which I want to add the lookup data:

index=index_name sourcetype=source_name
| eval Actualstarttime=strftime(strptime(NEXT_START,"%Y/%m/%d %H:%M:%S"),"%H:%M")
| eval Job_start_by=strftime(strptime(LAST_START,"%Y/%m/%d %H:%M:%S"),"%H:%M")
| eval START_SLA=if(Job_start_by <= Actualstarttime,"Started On Time","Started Late")
| eval END_SLA=if(RUNTIME <= AVG_RUN_TIME,"END ONTIME","END SLA BREACH")
| search NEXT_START!=NULL
| table JOB_NAME,JOB_GROUP,TIMEZONE,STATUS,Currenttime,STATUS_TIME,LAST_START,LAST_END,NEXT_START,DAYS_OF_WEEK,EXCLUDE_CALENDAR,RUNTIME,Actualstarttime,Job_start_by,START_SLA,AVG_RUN_TIME,END_SLA
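Assuming a lookup definition has been created from Autosys.csv (called autosys_lookup here; the name is hypothetical), the lookup command can add KB and REGION to each event by matching on JOB_NAME:

index=index_name sourcetype=source_name
| lookup autosys_lookup JOB_NAME OUTPUT KB REGION

Placed immediately after the base search and before the existing eval lines, this makes KB and REGION available so they can simply be added to the table command at the end. If the field is named or capitalised differently in the CSV, use the AS clause, e.g. | lookup autosys_lookup JOB_NAME AS job_name OUTPUT KB REGION.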
Hello everyone, currently our indexers keep crashing randomly. We're only running Linux, on Splunk 9.0.2. Any suggestions on what the crashing thread means and how to solve this? Thank you.

Received fatal signal 6 (Aborted) on PID 235655.
Cause: Signal sent by PID 235655 running under UID 1018.
Crashing thread: FwdDataReceiverThread
Registers:
RIP: [0x00007F4A05C3E387] gsignal + 55 (libc.so.6 + 0x36387)
RDI: [0x0000000000039887] RSI: [0x00000000000399C9] RBP: [0x000000000000008F] RSP: [0x00007F49E4FFE238]
RAX: [0x0000000000000000] RBX: [0x000055B8710F5CA8] RCX: [0xFFFFFFFFFFFFFFFF] RDX: [0x0000000000000006]
R8: [0x00007F49E4FFF700] R9: [0x00007F4A05C552CD] R10: [0x0000000000000008] R11: [0x0000000000000206]
R12: [0x000055B870FE5A93] R13: [0x000055B8710F5D88] R14: [0x000055B872226488] R15: [0x00007F49E4FFE4E0]
EFL: [0x0000000000000206] TRAPNO: [0x0000000000000000] ERR: [0x0000000000000000] CSGSFS: [0x0000000000000033] OLDMASK: [0x0000000000000000]
OS: Linux
Arch: x86-64
Backtrace (PIC build):
[0x00007F4A05C3E387] gsignal + 55 (libc.so.6 + 0x36387)
[0x00007F4A05C3FA78] abort + 328 (libc.so.6 + 0x37A78)
[0x000055B86E1D4D26] ? (splunkd + 0x1A08D26)
[0x000055B86EE39BD2] _ZN26HealthDistIngestionLatency29calculateAndUpdateHealthColorEv + 914 (splunkd + 0x266DBD2)
[0x000055B86E744627] _ZN22TcpInPipelineProcessor7processER15CowPipelineData + 199 (splunkd + 0x1F78627)
[0x000055B86E74CD57] _ZN14FwdDataChannel16s2sDataAvailableER15CowPipelineDataRK15S2SPerEventInfom + 167 (splunkd + 0x1F80D57)
[0x000055B86F2B2255] _ZN11S2SReceiver11finishEventEv + 261 (splunkd + 0x2AE6255)
[0x000055B86F059E48] _ZN18StreamingS2SParser5parseEPKcS1_ + 6520 (splunkd + 0x288DE48)
[0x000055B86E73E004] _ZN16CookedTcpChannel7consumeER18TcpAsyncDataBuffer + 244 (splunkd + 0x1F72004)
[0x000055B86E74055D] _ZN16CookedTcpChannel13dataAvailableER18TcpAsyncDataBuffer + 45 (splunkd + 0x1F7455D)
[0x000055B86F592D03] _ZN10TcpChannel11when_eventsE18PollableDescriptor + 531 (splunkd + 0x2DC6D03)
[0x000055B86F4D5BCC] _ZN8PolledFd8do_eventEv + 124 (splunkd + 0x2D09BCC)
[0x000055B86F4D6B39] _ZN9EventLoop3runEv + 617 (splunkd + 0x2D0AB39)
[0x000055B86F58D68C] _ZN19Base_TcpChannelLoop7_do_runEv + 28 (splunkd + 0x2DC168C)
[0x000055B86F58D78E] _ZN25SubordinateTcpChannelLoop3runEv + 222 (splunkd + 0x2DC178E)
[0x000055B86F59A16D] _ZN6Thread37_callMainAndDiscardTerminateExceptionEv + 13 (splunkd + 0x2DCE16D)
[0x000055B86F59B062] _ZN6Thread8callMainEPv + 178 (splunkd + 0x2DCF062)
Hi Splunk community, I need to display data as shown in the table below:

Component   Total units   Violated units   Matched [%]
Type A      1             1                99
Type B      10            10               75
Type C      100           85               85
Total       111           96               86

In the Total row, the Matched value is the average of the column, while the other columns are sums. Is it possible to insert the average value into the total row as shown? Here's my SPL:

index="my_index" source="*sourcename*"
| stats count as total_units count(eval(isnull(approval_message))) as violated_units values(matched_percentage) as matched by component
| addcoltotals total_units violated_units labelfield=component
| rename total_units as "Total Units", violated_units as "Violated Units", matched as "Matched [%]"
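addcoltotals can only sum, so one hedged sketch (assuming a single matched_percentage value per component) is to build the Total row with appendpipe instead, summing the counts but averaging the percentage:

index="my_index" source="*sourcename*"
| stats count as total_units count(eval(isnull(approval_message))) as violated_units values(matched_percentage) as matched by component
| appendpipe [ stats sum(total_units) as total_units sum(violated_units) as violated_units avg(matched) as matched | eval component="Total", matched=round(matched,0) ]
| rename total_units as "Total Units", violated_units as "Violated Units", matched as "Matched [%]"

appendpipe runs its subpipeline over the rows already produced by stats and appends the result, so the Total row is computed from the same per-component rows that appear in the table.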
I am seeing the one error and one info log below on my Splunk Cloud instance, and I am not able to fetch data:

1. socket error from 127.0.0.1:52108 while accessing /en-US/: Connection closed by peer
2. 12-20-2022 05:59:00.096 +0000 INFO ExecProcessor [24710 ExecProcessorSchedulerThread] - setting reschedule_ms=59904, for command=/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/search/bin/quarantine_files.py

I am using the trial version of Splunk Cloud. Any suggestions?
Hi Splunk Inc. Team, I'm experiencing truncation issues across all "OktaIM2:*" sourcetypes, which in most cases only TRUNCATE=250000 can resolve. I also found an issue with the LINE_BREAKER regex pattern for sourcetype=OktaIM2:group causing logs not to be ingested: the current pattern defaults to ([\r\n]+) and I had to modify it to (?<=\}\}\})(\, ). Can we please have these issues addressed and a new version of this add-on cut on Splunkbase? Thank you.
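Until the add-on itself is updated, a local override is one possible workaround; the stanza below simply combines the two settings described above and would live in a local props.conf wherever parsing happens (heavy forwarder or indexer tier):

# local override sketch based on the settings reported above
[OktaIM2:group]
TRUNCATE = 250000
LINE_BREAKER = (?<=\}\}\})(\, )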
Alerts suddenly stopped in my local instance, and I am getting the error shown in the image above. Can anyone please suggest a solution? I didn't change my email password; it is the same as when everything was working. I configured my Outlook mail with an app password, and I have recreated the app password and reconfigured it, but I am still facing the issue. Thank you.
Hi, after onboarding Trend Micro XDR we are facing a few issues: 1. We are getting the logs in JSON format. 2. The data is not parsed. Queries: 1. Can you please help us with how to convert the data from JSON format to raw logs? 2. How do we parse the data? We are not finding any add-on for this. Note: attaching a snap. We are getting data, and below there is an option "show as raw text"; when we click on it, everything comes out on the same line. Kindly help us with how to solve this issue. Thanks, Debjit
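If the events are valid JSON, a hedged first step is to extract the fields at search time with spath (the index name below is a placeholder):

index=trendmicro_xdr
| spath

spath walks the JSON in _raw and creates a field for every key, which often removes the need for a dedicated add-on; setting KV_MODE = json in props.conf for the sourcetype makes the same search-time extraction happen automatically.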
Hi All, I have integrated Splunk HEC with Spring Boot. When I hit the application and check in Splunk, I am unable to see the logs in a Splunk search against the given index. I am using sourcetype log4j2. Can anyone help me? Thanks in advance
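As a first check, it can help to confirm whether the HEC requests are reaching Splunk at all; the internal logs usually show token or index problems (this assumes access to the _internal index):

index=_internal sourcetype=splunkd component=HttpInputDataHandler

Errors from this component typically point to an invalid or disabled token, or a token that is not allowed to write to the target index; if nothing appears there and nothing arrives in the target index, the Spring Boot appender may not be sending to the correct HEC URL or port.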
Hi, I have 2 searches.

1st query (100 results, including duplicate values of number):

index="abc" message.appName=app1 "Description"="After some string*"
| table _time Id number

2nd query (80 results, including duplicate values of d_number):

index="abc" message.appName=app2 "Description"="After some string2*"
| table _time d_Id d_number

The values of d_number and number match each other. How do I get only those number values which are not matched by any d_number? I need only the 100-80=20 number rows, which may contain duplicate values, from the 1st query (i.e. query1 minus query2). Thank you in advance for your answer.
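One hedged way to express query1 minus query2 is to exclude, via a subsearch, every number that appears as a d_number in the second query (subject to the usual subsearch result limits):

index="abc" message.appName=app1 "Description"="After some string*"
| search NOT [ search index="abc" message.appName=app2 "Description"="After some string2*" | rename d_number AS number | fields number | dedup number ]
| table _time Id number

Rows from the first query whose number matches any d_number are dropped, while duplicates among the remaining 20 numbers are kept.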
It seems that the KV store is enabled by default on all servers. On non-SHs, if we set [kvstore] disabled = true and upgrade from Splunk 8.1.x to Splunk 9.0.x: Will the storage engine migrate from MMAPv1 to WiredTiger? Will the server version upgrade to 4.2.17? I know that if [kvstore] disabled = false, the upgrade should migrate to WiredTiger and server version 4.2.17. I am just wondering whether the migration and upgrade happen regardless of whether the KV store is enabled. I may need to test this out in the lab.
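Whatever the upgrade does, the resulting state can be confirmed on the instance afterwards; a sketch using the KV store status REST endpoint (the exact output field names, such as current.storageEngine and serverVersion, are from memory and may differ slightly between versions):

| rest /services/kvstore/status splunk_server=local

The equivalent CLI check is splunk show kvstore-status, which reports the storage engine and server version for that node.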