All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi All, we have installed Splunk Enterprise Security 7.0.1 with the OT Security Add-on, and we would like to upgrade Splunk Enterprise Security to the latest version, ES 7.1.1. We made a lot of changes to the search queries in the OT Security Add-on to bring up the data. Will those changes go away after the ES upgrade? Can you please confirm? VK

Hi, I found similar questions, but the usual solution of using HEADER_FIELD_LINE_NUMBER did not work. My custom CSV sourcetype is working fine, except I'm getting an extra event containing the column names. Splunk knows they're column names, yet it still treats them as fields, so the event has Col1=Col1, Col2=Col2, etc. The CSVs all start the same way: an identical line 1, then an identical line 2 which holds the column names. After adding HEADER_FIELD_LINE_NUMBER = 2 (in props.conf on the forwarder), I'm still getting events with the column names, but now I'm ALSO getting events with just the first line as well. Am I missing something? Thanks
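
A minimal sketch of the forwarder-side stanza this setup suggests, assuming the sourcetype uses INDEXED_EXTRACTIONS (which is what HEADER_FIELD_LINE_NUMBER applies to, and which must live on the universal forwarder); the stanza name and the preamble pattern are placeholders:

[my_custom_csv]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 2
# Hypothetical: PREAMBLE_REGEX can discard the fixed banner line entirely
PREAMBLE_REGEX = ^IDENTICAL_FIRST_LINE_PATTERN
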
Hello, I am working on a project to integrate Splunk with an LLM. I have created an app that searches for vulnerabilities within source code and reports back the findings, as well as the modified source code with the findings corrected. Here's my issue: it works for JavaScript, Java, PHP, and many others, but the moment I upload Python or Simple XML, the system cannot process it because it treats the content as part of the actual dashboard code. My question is: how can I encapsulate the token value (for example, '$SourceCode_Tok$') so the system interprets it correctly as external code to be analyzed? Thank you all!
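
One hedged idea, assuming the token is consumed inside an SPL query in Simple XML: the built-in token filters control how a token value is injected, and $SourceCode_Tok|s$ quotes and escapes the value so it is passed along as an opaque string rather than re-parsed (|h and |u are the HTML and URL escaping variants). A minimal sketch:

<query>| makeresults | eval source_code = $SourceCode_Tok|s$</query>
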
Hi, I'm trying to create a POC for building a custom monitoring extension from (custom written) scripts.

I created <machine_agent_home>/monitors/<custom_extension_directory>, in my case <machine_agent_home>/monitors/CustomMonitor. In this CustomMonitor directory I created two files.

The first file is a shell script that prints to stdout in the format:

name=Custom Metrics|Hardware Resources|$ZONE|$server_name conntrack utilization percent,value=$Ctrack_utilization

This works just fine; I tested it by running it directly on the server from the command prompt.

The second file is monitor.xml with the following content:

<monitor>
    <name>Hardware Resources</name>
    <type>managed</type>
    <description>Monitor open file count</description>
    <monitor-configuration>
    </monitor-configuration>
    <monitor-run-task>
        <execution-style>continuous</execution-style>
        <name>Run</name>
        <type>executable</type>
        <task-arguments>
        </task-arguments>
        <executable-task>
            <type>file</type>
            <file><shell_script_file_name></file>
        </executable-task>
    </monitor-run-task>
</monitor>

I then restarted the machine agent, but my data is not visible under Servers > Metric Browser.

I tried modifying monitor.xml to use periodic execution (since I read somewhere that for "continuous" the script needs to run in an infinite loop):

<execution-style>periodic</execution-style>
<execution-frequency-in-seconds>60</execution-frequency-in-seconds>
<execution-timeout-in-secs>30</execution-timeout-in-secs>

but my data is still not visible in the Metric Browser. I also tried a different stdout format:

name=Hardware Resources|$ZONE|$server_name conntrack utilization percent,value=$Ctrack_utilization

but that did not help either. Just to mention: in the Metric Browser I do see the relevant server with its default metrics (I did not include the image so as not to reveal the actual server name). I was hoping to see another expandable folder captioned with the value of the variable $ZONE. Has anyone successfully managed to get this running?
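
For reference, a minimal sketch of the shape a continuous-style script is generally expected to take (with "continuous", the machine agent expects the script to keep running and emit metric lines in a loop); the variables here mirror the ones in the question and are assumptions:

#!/bin/bash
# Sketch only: ZONE, server_name and Ctrack_utilization are assumed to be
# computed the same way as in the existing script.
while true; do
  echo "name=Custom Metrics|Hardware Resources|${ZONE}|${server_name} conntrack utilization percent,value=${Ctrack_utilization}"
  sleep 60
done
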
So I have been trying to get SC4S working, and I know where the docs are: https://splunk.github.io/splunk-connect-for-syslog/main/ I have followed them closely, have gotten SC4S events in, and have tested HEC data in successfully. However, for some reason I can't get ePO data in. I am attaching an image of my env_file, proof that I got SC4S working, and a concern. I need to know what I should change. Thanks.

I'm reviewing some aspects of the Splunk Cloud platform related to workload management, and I have 3 questions:

- Is there any way to change the default workload pool value?
- Is there any way to use admission rules based on role/user?
- Can you share some examples of using an admission rule, besides a search restriction, when creating a role?
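
On the second and third questions, a sketch of the kind of predicate an admission rule can carry (this is pseudo-notation for the Admission Rules UI, not a conf file; the role name is a placeholder, and the predicate fields assumed here are the documented ones such as role, user, app, and search_time_range):

rule name:  block_alltime_for_contractors
predicate:  role=contractor AND search_time_range=alltime
action:     filter (the search is denied at submission time)
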
Hello All, I need your help writing SPL to fetch details of the events that occur when users reach or cross the threshold limit defined for their role for executing Splunk searches. Any information would be very helpful. Thank you.
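
As a hedged starting point (the exact message text varies by Splunk version), these limit violations are logged by splunkd to the _internal index, so a generic term search can surface them; tighten the terms to whatever your instance actually logs:

index=_internal sourcetype=splunkd quota "concurrent searches"
| table _time, host, component, _raw
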
Hello, I am trying to figure out how I can produce a report showing the total number of policies in my query. I have the sample event below:

{
  "auth": {
    "display_name": "sample-name",
    "policies": [
      "default",
      "admin"
    ]
  },
  "type": "request"
}

When I use the search query below, I get a count by display_name:

type="request" | stats count by auth.display_name

However, what I need is a count of the policies, which in this case are default and admin. I am using the query below, but it does not give me any result:

type="request" | stats count by auth.policies

Would someone be able to guide me on the correct syntax to get the result I want?
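
Since policies is a JSON array, Splunk usually extracts it as the multivalue field auth.policies{} (note the trailing braces). A sketch that expands the array and counts each policy value:

type="request"
| rename auth.policies{} as policy
| mvexpand policy
| stats count by policy
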
Crash log

Crashing thread: FwdDataReceiverThread

Registers:
 RIP: [0x00007F412B89E70F] gsignal + 271 (libc.so.6 + 0x3770F)
 RDI: [0x0000000000000002]
 RSI: [0x00007F41097FE060]
 RBP: [0x00007F412B9EEC28]
 RSP: [0x00007F41097FE060]
 RAX: [0x0000000000000000]
 RBX: [0x0000000000000006]
 RCX: [0x00007F412B89E70F]
 RDX: [0x0000000000000000]
 R8:  [0x0000000000000000]
 R9:  [0x00007F41097FE060]
 R10: [0x0000000000000008]
 R11: [0x0000000000000246]
 R12: [0x000055B181DD32C8]
 R13: [0x000055B181D2B95A]
 R14: [0x0000000000000C9A]
 R15: [0x000000000000080B]
 EFL: [0x0000000000000246]
 TRAPNO: [0x0000000000000000]
 ERR: [0x0000000000000000]
 CSGSFS: [0x002B000000000033]
 OLDMASK: [0x0000000000000000]

OS: Linux
Arch: x86-64

Backtrace (PIC build):
 [0x00007F412B89E70F] gsignal + 271 (libc.so.6 + 0x3770F)
 [0x00007F412B888B25] abort + 295 (libc.so.6 + 0x21B25)
 [0x00007F412B8889F9] ? (libc.so.6 + 0x219F9)
 [0x00007F412B896CC6] ? (libc.so.6 + 0x2FCC6)
 [0x000055B17FCC89D7] CookedTcpChannel::kickOutput() + 791 (splunkd + 0x19B09D7)
 [0x000055B17FCCC608] CookedTcpChannel::sendACK_unlocked(bool) + 168 (splunkd + 0x19B4608)
 [0x000055B17FCD6E2D] CookedTcpChannel::addUncommitedEventId(unsigned long) + 109 (splunkd + 0x19BEE2D)
 [0x000055B17FCD6F2E] CookedTcpChannel::s2sDataAvailable(CowPipelineData&, S2SPerEventInfo const&, unsigned long) + 190 (splunkd + 0x19BEF2E)
 [0x000055B17FCD7020] FwdDataChannel::s2sDataAvailable(CowPipelineData&, S2SPerEventInfo const&, unsigned long) + 96 (splunkd + 0x19BF020)
 [0x000055B18072E3CD] S2SReceiver::gotOlds2sEvent(CowPipelineData&, S2SPerEventInfo const&) + 381 (splunkd + 0x24163CD)
 [0x000055B1805196AE] StreamingS2SParser::parse(char const*, char const*) + 11710 (splunkd + 0x22016AE)
 [0x000055B17FCC8B24] CookedTcpChannel::consume(TcpAsyncDataBuffer&) + 244 (splunkd + 0x19B0B24)
 [0x000055B17FCCB08D] CookedTcpChannel::dataAvailable(TcpAsyncDataBuffer&) + 45 (splunkd + 0x19B308D)
 [0x000055B1809D7973] TcpChannel::when_events(PollableDescriptor) + 531 (splunkd + 0x26BF973)
 [0x000055B18092355C] PolledFd::do_event() + 124 (splunkd + 0x260B55C)
 [0x000055B1809244D0] EventLoop::run() + 624 (splunkd + 0x260C4D0)
 [0x000055B1809D269C] Base_TcpChannelLoop::_do_run() + 28 (splunkd + 0x26BA69C)
 [0x000055B1809D279E] SubordinateTcpChannelLoop::run() + 222 (splunkd + 0x26BA79E)
 [0x000055B1809DF4D7] Thread::callMain(void*) + 135 (splunkd + 0x26C74D7)
 [0x00007F412BC312DE] ? (libpthread.so.0 + 0x82DE)
 [0x00007F412B962E83] clone + 67 (libc.so.6 + 0xFBE83)

I have a lookup table from which I need to read the IP addresses one by one, perform calculations on each address, and then place the results in a new field. Here is my query, which performs the calculations for a single input IP address:

"IP-Address" NOT Audit NOT "%BGP_SESSION-5-ADJCHANGE" NOT "%BGP-3-NOTIFICATION" NOT "%BGP-5-NBR_RESET" NOT passive NOT "%BGP-3-BGP_NO_REMOTE_READ" AND "%BGP-5-ADJCHANGE: neighbor"
| rex "%BGP-5-ADJCHANGE: neighbor (?P<Interface_Name>(.+)),"
| transaction host startswith="Down" endswith="Up" keepevicted=true
| eval duration=if(duration==0,now()-_time,duration)
| convert rmunit(duration) as numSecs
| stats sum(numSecs) as Downtime_Duration
| eval Uptime_duration = 2592000 - Downtime_Duration
| eval Availability_Percentage = Uptime_duration / 2592000
| eval string_Downdur = tostring(round(Downtime_Duration), "duration")
| eval string_Updur = tostring(round(Uptime_duration), "duration")
| eval formatted_Downdur = replace(string_Downdur,"(?:(\d+)\+)?0?(\d+):0?(\d+):0?(\d+)","\1d \2h \3m \4s")
| eval formatted_Updur = replace(string_Updur,"(?:(\d+)\+)?0?(\d+):0?(\d+):0?(\d+)","\1d \2h \3m \4s")
| eval stringDownSecs = replace(formatted_Downdur, "^d (0h (0m )?)?","")
| eval stringUpSecs = replace(formatted_Updur, "^d (0h (0m )?)?","")
| table host, Interface_Name, _time, stringDownSecs, stringUpSecs, Availability_Percentage
| rename _time AS "Interface Down", host AS "Reporting Host", stringDownSecs AS "Total Downtime", stringUpSecs AS "Total Uptime"

Also, this is the query that reads the lookup table:

| inputlookup LinkTunnelMap
| table TunnelIP

The question is: how can I retrieve the IP addresses from this lookup table one by one, perform the calculations on each address, and store the results in a new field, like a for-loop?
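
One common pattern for this (a hedged sketch) is the map command, which runs a subsearch once per row of the lookup and substitutes $TunnelIP$ into it; the inner search below is abbreviated, and maxsearches must be at least the number of lookup rows:

| inputlookup LinkTunnelMap
| table TunnelIP
| map maxsearches=500 search="search \"$TunnelIP$\" \"%BGP-5-ADJCHANGE: neighbor\" | transaction host startswith=\"Down\" endswith=\"Up\" keepevicted=true | eval duration=if(duration==0,now()-_time,duration) | stats sum(duration) as Downtime_Duration | eval TunnelIP=\"$TunnelIP$\""
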
So I am trying to link this to a token from another panel, but since "message_id" is a calculated field, it doesn't work. Is there any way I can make this work?

<index> <host>
| eval message_id=AREA.SUBID
| stats count by USER, TEXT
| search message_id="$messid1$"
| sort - count
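
A hedged sketch of one fix: stats only keeps the fields named in its by clause, so message_id no longer exists by the time the search command runs (the period in AREA.SUBID concatenates the two fields in eval). Filtering on the token before stats keeps the field available:

<index> <host>
| eval message_id=AREA.SUBID
| search message_id="$messid1$"
| stats count by USER, TEXT
| sort - count
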
I am new to Splunk. I have an event like the one below, where the URL value is wrapped in two double quotes; when I extract the URL value, it always comes back empty. Is there a way to remove the outer double quotes?

2023-04-20T16:06:08.595+0000 SFDCLogType="ApexCallout" EVENT_TYPE="ApexCallout" TIMESTAMP="20230420160608.595" REQUEST_ID="TID:606896510000061f42" RUN_TIME="45" CPU_TIME="-1" URI="CALLOUT-LOG" TYPE="OData" METHOD="GET" SUCCESS="0" TIME="35" REQUEST_SIZE="-1" RESPONSE_SIZE="0" URL=""https://XXX.test.com:443/api/service/"" TIMESTAMP_DERIVED="2023-04-20T16:06:08.595Z"
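
A sketch of a search-time extraction that tolerates the doubled quotes, assuming the outer quotes always come in pairs:

| rex "URL=\"\"(?<url>[^\"]+)\"\""
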
Hi, I am new to AppDynamics. I am trying to use AppDynamics to monitor some key features of my application. I am using the C++ SDK, and there are some behaviors I don't understand.

How is it possible that the agent tier differs from the tier displayed by AppDynamics (which here is ISS-MCSMS-APP)?

Since my application is able to gather all the information for a transaction in one place, I create the corresponding transaction for the AppDynamics agent in a single thread (meaning I overwrite the start time and the processing time for the BT and all exit calls). Since my application has many different services, I want to display the response time per service (tier). Therefore, I declare the services as backends and, for each transaction, I add the exit calls. However, I don't want them displayed as an outside web service, so I add an appd_context for each tier and create a correlated BT for each exit call.

The code looks like this:

// pcp_trans is a pointer to a transaction that gathers all the information
appd_bt_handle lc_btHandle = appd_bt_begin("MY_BT", NULL);
appd_bt_override_start_time_ms(lc_btHandle, pcp_trans->startTime);
appd_bt_override_time_ms(lc_btHandle, pcp_trans->procTime);

for (auto& lcr_extCall : pcp_trans->exitCall) // vector of exit calls
{
    appd_exitcall_handle lc_ecHandle = appd_exitcall_begin(lc_btHandle, lcr_extCall.name); // name is the tier name
    appd_exitcall_override_start_time_ms(lc_ecHandle, lcr_extCall.startTime);
    appd_exitcall_override_time_ms(lc_ecHandle, lcr_extCall.procTime);

    /*
     * Here I add the appd context for the service if the context does not exist
     */
    const char* lz_corhdr = appd_exitcall_get_correlation_header(lc_ecHandle);
    appd_bt_handle lc_corrbtHandle = appd_bt_begin_with_app_context(lcr_extCall.context, "MY_BT", lz_corhdr);
    appd_bt_override_start_time_ms(lc_corrbtHandle, lcr_extCall.startTime);
    appd_bt_override_time_ms(lc_corrbtHandle, lcr_extCall.procTime);
    appd_bt_end(lc_corrbtHandle);

    appd_exitcall_end(lc_ecHandle);
}

appd_bt_end(lc_btHandle);

Can someone tell me what's wrong? Thanks.

Hi, I have been working on configuring a universal forwarder with a free Splunk Cloud trial, using the link below to set up the forwarder: https://docs.splunk.com/Documentation/Forwarder/9.0.4/Forwarder/Configuretheuniversalforwarder?ref=hk#Configuretheuniversalforwarder#Configure_the_universal_forwarder There are three files missing from the folder specified under "Find the configuration files": inputs.conf, outputs.conf, and deploymentclient.conf. Is that meant to happen in a free Splunk Cloud trial?
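
For what it's worth, none of those three files ship with the forwarder by default; they are created under $SPLUNK_HOME/etc/system/local (or in an app) when the forwarder is configured, and on Splunk Cloud the outputs.conf settings normally arrive via the Universal Forwarder credentials package downloaded from your Cloud instance. A minimal deploymentclient.conf sketch, with a hypothetical deployment server address:

# $SPLUNK_HOME/etc/system/local/deploymentclient.conf
[deployment-client]

[target-broker:deploymentServer]
targetUri = ds.example.com:8089
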
My input tag looks like this:

<input type="multiselect" token="fruit_name">
  <label>Fruit name</label>
  <choice value="*">All</choice>
  <default>*</default>
  <initialValue>*</initialValue>
  <fieldForLabel>fruit_name</fieldForLabel>
  <fieldForValue>fruit_name</fieldForValue>
  <search>
    <query>index=splunk query | mvexpand fruit_name</query>
  </search>
  <delimiter> </delimiter>
  <change>
    <eval token="fruit">
      <expression>if(isnull($fruit_name$), "//*", $fruit_name$)</expression>
      <default>*</default>
    </eval>
  </change>
</input>

I use `fruit` later in a panel in the XML dashboard. The issue is that if the user doesn't select anything from the dropdown, `fruit_name` still holds the value '*', but the panel which uses `fruit` just shows the message "Search is waiting for input...". How can I make sure that when the user doesn't select anything, `fruit` also holds the default value '*', along with `fruit_name`? Please help.
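
A hedged sketch: <change> only fires when the user actually changes the input, so on first load the fruit token is never set and the panel keeps waiting. Seeding the token at dashboard load with the Simple XML <init> element (assuming your Splunk version supports it) is one way out:

<form>
  <init>
    <set token="fruit">*</set>
  </init>
  <!-- existing fieldset with the multiselect, and the panels -->
</form>
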
Hello, I have the search query below:

index=my_index openshift_cluster="cluster009" sourcetype=openshift_logs openshift_namespace=my_ns ("POST /online-shopping/debt_cart/v1 HTTP" OR "GET /online-shopping/debt_cart/v1/* HTTP" OR "GET *online-shopping*debt_cart*productType* HTTP")
| eval Operations=case(
    searchmatch("POST /online-shopping/debt_cart/v1 HTTP"),"create_cart",
    searchmatch("GET /online-shopping/debt_cart/v1/*/summary HTTP"),"cart_summary",
    searchmatch("GET *online-shopping*debt_cart*productType* HTTP"),"cart_productType",
    match(_raw, "GET /online-shopping/debt_cart/v1/[^/ ?]+\sHTTP"),"getDebtCart")
| stats avg(processDuration) as average perc90(processDuration) as response90 by Operations
| eval average=round(average,2), response90=round(response90,2)

which displays the data as below:

Operations        average  response90
create_cart       250      380
cart_summary      240      330
cart_productType  210      321
getDebtCart       260      365

Now I want to add the count of the URL pattern against each operation, as below. I tried adding the count as part of stats, but it is not working, and I'm not sure how to proceed.

Operations        count  average  response90
create_cart       1919   250      380
cart_summary      2001   240      330
cart_productType  1971   210      321
getDebtCart       8162   260      365
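
For reference, stats accepts count alongside other aggregates in a single call, so a sketch of the adjusted tail of the search would be:

| stats count avg(processDuration) as average perc90(processDuration) as response90 by Operations
| eval average=round(average,2), response90=round(response90,2)
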
Example field value in "Field1":

Test1: Successful
Test2: 200
Type: Http; Auth: **
URL: abc.com.....
IP--Address: xx.xxx.xx.xx
Name: xxxxx
Path Location: /hdkdsd-/hkk/gdjshd
Level: abc
User: xxx
Site: vjsdjsd

The query below is not returning any values:

index=xxx
| rex field=Field1 "Test2\:\s+(?<A1>\d+)\s+"
| rex field=Field1 "URL\:\s+(?<A2>\w+)\s+"
| rex field=Field1 "User\:\s+(?<A3>\w+)\s+"
| table A1, A2, A3
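
A hedged sketch of why this can fail: \w+ matches only word characters, so a value such as abc.com..... stops matching at the first dot, and the trailing \s+ requires whitespace after the value, which the last field on a line may not have. Loosening the patterns:

index=xxx
| rex field=Field1 "Test2:\s+(?<A1>\d+)"
| rex field=Field1 "URL:\s+(?<A2>\S+)"
| rex field=Field1 "User:\s+(?<A3>\S+)"
| table A1, A2, A3
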
Hello guys, I have a question about event correlation. I am getting events from several different security vendors, such as Area 1, Bitdefender, CrowdStrike, and Cloudflare. I want the event correlation in my Splunk to happen automatically: if an incident happens on any endpoint, it should be correlated against the other data sources as well. I am not sure how to achieve this. Also, because the events come from different vendors, in some events the IP is listed as the source IP and in others as the computer IP; I would like to normalize this across all the data sources so the field is the same everywhere. Any lead will be appreciated. Thanks, Jeewan
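
On the normalization part, the usual Splunk approach is the Common Information Model (CIM): alias each vendor's field onto the common name at search time so every source exposes, say, src_ip. A props.conf sketch, where the sourcetype and the original field name are hypothetical placeholders:

# props.conf (names are placeholders)
[vendor:endpoint:events]
FIELDALIAS-normalize_src = computer_ip AS src_ip
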
I am looking to find all scheduled searches within the environment that are using a timeframe of 'All time'. For example, if a search is scheduled to run every hour and is using a timeframe of 'All time', I would like to change that search to use 'Last 60 minutes' instead. Any helpful searches would be appreciated!
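
A hedged starting point using the saved-searches REST endpoint; 'All time' typically shows up as a missing, empty, or 0 dispatch.earliest_time, but verify against your environment:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search is_scheduled=1
| where isnull('dispatch.earliest_time') OR 'dispatch.earliest_time'="" OR 'dispatch.earliest_time'="0"
| table title, eai:acl.app, cron_schedule, dispatch.earliest_time, dispatch.latest_time
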
I am looking for a REST endpoint to attach a source file to an event. You can do this through the browser, but in the REST API I only found an endpoint that lets you download the metadata of a given file. Is it possible to upload a source file via the REST API? https://docs.splunk.com/Documentation/SOAR/current/PlatformAPI/RESTVault - only GET methods for metadata, like hash, filename, etc.