Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi, I don't understand the issue: the dropdown filter is showing duplicate entries. Can anyone please help me figure out how to resolve it?
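A minimal sketch of one common fix, assuming the dropdown is populated by a search and the field is named status (both names are hypothetical): deduplicate inside the populating search so each choice appears only once.

index=my_index sourcetype=my_sourcetype
| stats count by status
| fields status

Using stats by the field guarantees unique values; a trailing | dedup status would achieve the same.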
Hello team, below are my Splunk logs:

{ body_bytes_sent: 0 bytes_sent: 0 host: nice_host http_content_type: - http_referer: - http_user_agent: - kong_request_id: 8853b73ffef1c5522b4a383c286c825e log_type: kong query_string: - remote_addr: 10.138.100.153 request_id: 93258e0bc529fa9844e0fd2d69168d0f request_length: 1350 request_method: GET request_time: 0.162 scheme: https server_addr: 10.138.100.151 server_protocol: HTTP/1.1 status: 499 time_local: 25/Feb/2024:05:11:24 +0000 upstream_addr: 10.138.103.157:8080 upstream_host: nice_host upstream_response_time: 0.000 uri: /v1/d5a413b6-7d00-4874-b706-17b15b7a140b }

{ body_bytes_sent: 0 bytes_sent: 0 host: nice_host http_content_type: - http_referer: - http_user_agent: - kong_request_id: 89cea871feba9f2d5216856f7a884223 log_type: kong query_string: productType=ALL remote_addr: 10.138.100.214 request_id: 9dbf69defb49a3595cf1040e6ab5d4f2 request_length: 1366 request_method: GET request_time: 0.167 scheme: https server_addr: 10.138.100.151 server_protocol: HTTP/1.1 status: 499 time_local: 25/Feb/2024:05:11:24 +0000 upstream_addr: 10.138.98.140:8080 upstream_host: nice_host upstream_response_time: 0.000 uri: /v1/a8b7570f-d0af-4d0d-bd6d-f6cf31892267 }

From the above, I want to extract request_time and upstream_response_time from the log events for the uri "/v1/*" whose query_string is empty (-). I tried the search queries below, but they return results containing both events with an empty query_string and events with values (productType=ALL):

index="my_indexx"
| spath host
| search host="nice_host"
| eval Operations=case(searchmatch("GET query_string: - /v1/*"),"getCart")
| stats avg(request_time) as avg_request_time avg(upstream_response_time) as avg_upstreamTime perc90(request_time) as 90_request_time perc90(upstream_response_time) as 90_upstreamResponseTime by Operations
| eval avg_request_time=round(avg_request_time,2)
| eval avg_upstreamTime=round(avg_upstreamTime,2)

index="ek_cloud_k8sdta_digital_platforms_kong"
| spath host
| search host="shopping-carts-service-oxygen-dev.apps.stg01.digitalplatforms.aws.emirates.dev"
| eval Operations=case(match(_raw, "/v1/[^/ ?]"),"getCart")
| stats avg(request_time) as avg_request_time avg(upstream_response_time) as avg_upstreamTime perc90(request_time) as 90_request_time perc90(upstream_response_time) as 90_upstreamResponseTime by Operations
| eval avg_request_time=round(avg_request_time,2)
| eval avg_upstreamTime=round(avg_upstreamTime,2)

Can someone help with this?
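A hedged sketch of one way to do this, assuming the fields (query_string, uri, request_time, upstream_response_time) are extracted at search time, which the spath usage above suggests: filter on the query_string field itself instead of matching raw text, so only the empty (-) events survive.

index="my_indexx" host="nice_host" uri="/v1/*" query_string="-"
| eval Operations="getCart"
| stats avg(request_time) as avg_request_time avg(upstream_response_time) as avg_upstreamTime perc90(request_time) as 90_request_time perc90(upstream_response_time) as 90_upstreamResponseTime by Operations
| eval avg_request_time=round(avg_request_time,2), avg_upstreamTime=round(avg_upstreamTime,2)

If the fields are not auto-extracted, running | spath first and then | search query_string="-" uri="/v1/*" should produce the same effect.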
Hi, I installed Splunk on a Linux server under /opt/splunk. The server has two disks, one 50 GB (sdb1) and another 6 TB (sda1). I want to move the /opt/splunk/var folder (and all of its contents) to /splunk/var, where the second, huge partition (sda1) is mounted. Essentially I want to separate etc and var onto different partitions: etc stays on sdb1 and var goes to sda1. I need a detailed solution. Thanks
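A rough sketch of one common approach, assuming the 6 TB partition is already mounted at /splunk and ownership matches the user Splunk runs as (paths are examples): stop Splunk, copy var, then point the old path at the new location.

/opt/splunk/bin/splunk stop
rsync -a /opt/splunk/var/ /splunk/var/     # preserve permissions and ownership
mv /opt/splunk/var /opt/splunk/var.bak     # keep a backup until verified
ln -s /splunk/var /opt/splunk/var          # the symlink covers all of var
/opt/splunk/bin/splunk start

An alternative is setting SPLUNK_DB in etc/splunk-launch.conf, but note that it only relocates var/lib/splunk (the index data), not the whole var tree; the symlink moves everything, including logs and the dispatch directory.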
Hi, we have a single Splunk instance (Linux) hosted in AWS. The current version is Splunk Enterprise 7.3.0 and we would like to upgrade to 9.x. Could someone please help us with the upgrade path and instructions?
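For what it's worth, a direct 7.3.0 to 9.x jump is not supported; the usual path is 7.3.0 to an 8.x release (e.g., 8.1.x), then to 9.x, but please verify against Splunk's official upgrade documentation for your exact target version. A hedged sketch of one hop for a tar-based Linux install (the filename is a placeholder):

/opt/splunk/bin/splunk stop
cp -rp /opt/splunk/etc /backup/splunk-etc                 # back up configuration first
tar -C /opt -xzf splunk-8.1.x-<build>-Linux-x86_64.tgz    # lay the new release over /opt/splunk
/opt/splunk/bin/splunk start --accept-license             # migration runs on first start

Repeat the same steps with the 9.x tarball once the intermediate version is confirmed healthy, and check app compatibility before each hop (the Python 2 to 3 transition in 8.x is the usual sticking point).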
Hi, we currently have Syncsort configured to send SYSLOG data to Splunk for dashboards. Is it possible to send application data (for example, an ESDS file) to Splunk using Ironstream?

Thanks.
I have the following sample events coming from source="/project/admin/git/ys/es/perf/de/pure/abc0*/logs/*/results.csv"

Event 1, with no timestamp; this type of data is in files which are 2 days old or older:
abc|pxyz|0.1054|ops|0|null|null

Event 2, with a timestamp; these are new files from the same location, and going forward the data will look like this:
2024-02-23T00:48:17|AID|read|454482.351348|PS|0|null|null

I want to send the data that has a timestamp to Splunk and route the rest to the null queue (i.e., not ingest it). First I tried MAX_DAYS_AGO=2, which did not work. Then I tried the following props and transforms, but they did not work either.

transforms.conf:
[filter]
REGEX = ^^\D*
DEST_KEY = queue
FORMAT = nullQueue

props.conf:
CHARSET=AUTO
SHOULD_LINEMERGE=false
category=Custom
disabled=false
pulldown_type=true
TRANSFORMS-null=filter

Thanks in advance
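Two things stand out, offered as guesses: the props.conf settings have to live under a stanza (a sourcetype or a [source::...] stanza) to apply, and REGEX = ^^\D* can match zero characters, so it matches every event, timestamped or not. A hedged sketch that drops only events that do not start with an ISO timestamp, applied on the parsing tier (indexer or heavy forwarder):

props.conf:
[source::/project/admin/git/ys/es/perf/de/pure/abc0*/logs/*/results.csv]
TRANSFORMS-null = drop_no_timestamp

transforms.conf:
[drop_no_timestamp]
REGEX = ^(?!\d{4}-\d{2}-\d{2}T)
DEST_KEY = queue
FORMAT = nullQueue

The negative lookahead sends an event to the null queue unless its first characters look like 2024-02-23T.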
Hi, I have two separate searches that work independently (expected count, actual count). I want to combine them to get a percentage of actual count to expected count; however, append, appendcols, and other ways of adding the searches together have so far not worked for me. Curious if there's a better way to use the stats, eval, or transaction commands to combine these searches. The end goal is a visualization that shows whether there's an issue when the actual count does not match the expected count, so I'm open to suggestions on better ways to achieve that goal.

Search 1 (counting all records that are sent through the producer class and are not part of the refresh process):

index=index
| search ("ProducerClass" AND "*Sending message:*") NOT "*REFRESH*"
| stats count as actual_count

Search 2 (sum of record counts on files processed through the opportunity class):

index=index
| search "OpportunityClass" AND "Processing file: file_name"
| rex field=_raw "Processing file: file_name with (?<record_count>[^\s]+) records"
| stats sum(record_count) as expected_count

I have tried append like this and it has not worked:

index=index
| search ("ProducerClass" AND "*Sending message:*") NOT "*REFRESH*"
| stats count as actual_count
| append [ search index=index "OpportunityClass" AND "Processing file: " | rex field=_raw "Processing file: file_name with (?<record_count>[^\s]+) records" | stats sum(record_count) as expected_count]
| eval percent=expected_count/actual_count * 100

appendcols similarly did not work ("Aborting Long Running Search"). I assume I am misunderstanding how these searches combine and that this is causing issues with the append-type commands. Using an OR on the searches works, but I'm unsure how to use other commands to group the results properly afterwards:

index=index
| search (("ProducerClass" AND "*Sending message:*") NOT "*REFRESH*") OR ("OpportunityClass" AND "Processing file: ")
| ...
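For what it's worth, one reason the append attempt misbehaves: each stats collapses its side to a single row, so after append the two totals sit on separate rows and the eval sees null for one of them. A hedged sketch of the single-search pattern, assuming only the OpportunityClass events match the rex (so record_count is null on producer events):

index=index (("ProducerClass" AND "*Sending message:*") NOT "*REFRESH*") OR ("OpportunityClass" AND "Processing file: file_name")
| rex field=_raw "Processing file: file_name with (?<record_count>[^\s]+) records"
| stats count(eval(isnull(record_count))) as actual_count sum(record_count) as expected_count
| eval percent=round(actual_count/expected_count*100,2)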
Hi all, we are seeing a scenario where there are a lot of unoptimised searches, dashboards, etc. which, when run, exhaust the CPU on our indexers. When users run resource-intensive ad-hoc searches and dashboards simultaneously, this becomes a problem: so many searches running together result in a 'server busy' error at the indexer.

1. Is there any way we can throttle CPU/memory usage per user/role/search?
2. Are there any documents on optimising searches for better performance and lower resource usage?
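On question 1: as far as I know there is no direct per-search CPU cap, but role-level quotas in authorize.conf limit concurrency, disk, and time range, and Workload Management (Splunk Enterprise on Linux) can pin search workloads into CPU/memory pools. A hedged sketch of role quotas (the role name and values are examples only):

authorize.conf:
[role_analyst]
srchJobsQuota = 3
rtSrchJobsQuota = 1
srchDiskQuota = 500
srchTimeWin = 604800

srchJobsQuota caps concurrent searches per user in the role, and srchTimeWin caps the search time range in seconds (here 7 days), which blocks the classic accidental all-time search.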
Hello everyone, recently I came across a very strange issue while adding a panel query to a dashboard. The following query works fine in Search, but as soon as I add it to a dashboard it says "No results found". Not sure why?

index=app_pl "com.thehartford.pl.model.exception.CscServiceException: null at com.thehartford.pl.rest.UserProfileController.buildUserProfile"
| bin span=1h _time
| table *
| stats count as Failure by _time
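Two common causes, offered as guesses: the panel's time range may differ from the one used in Search, and a raw search string pasted into Simple XML has to be XML-safe. Wrapping the query in CDATA in the XML source sidesteps the escaping question entirely (the time tokens below are examples):

<search>
  <query><![CDATA[
index=app_pl "com.thehartford.pl.model.exception.CscServiceException: null at com.thehartford.pl.rest.UserProfileController.buildUserProfile"
| bin span=1h _time
| stats count as Failure by _time
  ]]></query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
</search>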
I have a lookup table with 2 fields, IP and Name:

IP -> Name
['1.2.3.4', '2.3.5.0/24'] -> name1
['1.2.3.4', '6.7.8.9/31', '4.5.6.7', '1.1.1.1'] -> name2
['3.3.3.3/31', '4.4.4.4'] -> name3

I have a list of IPs like "1.2.3.4, 2.3.5.1".

This should give me the Name values of the lookup rows where the IPs are present. In the above example, since "1.2.3.4, 2.3.5.1" are present in the first 2 rows, the result would be:

name1
name2
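A hedged sketch of one approach, assuming the IP column really contains the bracketed string shown and the lookup is named my_ip_lookup (a placeholder): split each row's list into separate values, then test every candidate IP with cidrmatch, which handles both plain IPs and CIDR ranges.

| inputlookup my_ip_lookup
| eval IP=trim(IP,"[]")
| makemv delim="," IP
| mvexpand IP
| eval IP=trim(replace(IP,"'",""))
| where cidrmatch(IP,"1.2.3.4") OR cidrmatch(IP,"2.3.5.1")
| stats values(Name) as Name

The input IPs are hard-coded here for illustration; in practice they would come from a token, a subsearch, or another field.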
I am trying to create a search that generates a report showing host by event count in the last hour, alongside each host's average hourly event count over the last 7 days. So far I have the search below, which shows host by event count over the last hour, but I am struggling to add a column showing the weekly hourly average.

| tstats count where index=* by host, index, sourcetype
| addtotals
| sort -Total
| fields - Total
| rename count as events_latest_hour

Any help on how to add a column showing the 7-day hourly average for event count?
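A hedged sketch of one way to get both columns in a single pass: pull hourly counts for the last 7 days with tstats, then aggregate per host, taking the most recent hour and the average across hours.

| tstats count where index=* earliest=-7d@h latest=@h by host, _time span=1h
| stats latest(count) as events_latest_hour avg(count) as avg_hourly_7d by host
| eval avg_hourly_7d=round(avg_hourly_7d,0)
| sort -events_latest_hour

One caveat: hours with zero events produce no row, so the average is taken over active hours only; the gaps would need filling for a strict 168-hour average.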
I have 3 panels driven by a dropdown menu.

If A is selected:
panel 1 shows Search A
panel 2 shows a title and a link to a URL
panel 3 shows another search of its own (only when the dropdown is set to A)

If B is selected:
panel 1 shows Search B
panel 2 disappears
panel 3 disappears

If C is selected:
panel 1 shows Search C
panel 2 disappears
panel 3 disappears

If D is selected:
panel 1 shows Search D
panel 2 disappears
panel 3 disappears

<input type="dropdown" token="tokenSearchOption1" searchWhenChanged="true">
  <label>Sources</label>
  <choice value="A">A</choice>
  <choice value="B">B</choice>
  <choice value="C">C</choice>
  <choice value="D">D</choice>
  <change>
    <condition value="A">
      <set token="tokenSearchQuery"> index= search query A</set>
    </condition>
    <condition value="B">
      <set token="tokenSearchQuery">index= search query B</set>
    </condition>
    <condition value="C">
      <set token="tokenSearchQuery">index=search query C</set>
    </condition>
    <condition value="D">
      <set token="tokenSearchQuery">index= search query D</set>
    </condition>
  </change>
  <initialValue>"A"</initialValue>
</input>
</panel>
</row>
<row>
  <panel id="URL test">
    <title>Title URL</title>
    <html>
      <!-- <style>
      .dashboard-row Title .dashboard-panel h2.panel-title { font-size: 40px !important; text-align:left; font-weight:bold; }
      </style> -->
      <center>
        <style>.btn-primary { margin: 5px 10px 5px 0; font-size: 40px !important; }</style>
        <a href="URL for a website" target="blank" class="btn btn-primary"> Click here </a>
      </center>
    </html>
  </panel>
</row>
<row>
  <panel depends=dropdown A>
    <title>Magic</title>
    <table>
      <search>
        <query>Index=Run this search when drop down A </query>
        <earliest>$tokTime.earliest$</earliest>
        <latest>$tokTime.latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
      <option name="wrap">false</option>
    </table>
  </panel>
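A hedged sketch of the standard pattern for this, extending the existing <change> handler: set a helper token (showExtras is a made-up name) when A is chosen, unset it otherwise, and make panels 2 and 3 depend on it.

<change>
  <condition value="A">
    <set token="tokenSearchQuery">index= search query A</set>
    <set token="showExtras">true</set>
  </condition>
  <condition value="B">
    <set token="tokenSearchQuery">index= search query B</set>
    <unset token="showExtras"></unset>
  </condition>
  <!-- repeat the unset for C and D -->
</change>

<panel depends="$showExtras$">
  ...
</panel>

A panel whose depends token is unset is hidden, which produces exactly the show-on-A, hide-on-B/C/D behaviour described above.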
I have created 5 scheduled reports that each send a CSV of the search results to the same email address. Is there a way to have all 5 reports' CSV files arrive in one email? I tried looking this up, and someone suggested merging everything into one report by appending the search results of each report. But many of my reports already have a lot going on, and I wasn't sure whether that method would add a lot of strain and load time before the results come up; it might also disrupt how things are kept organized. Each report returns results for a different host, so merging them into one big CSV is not applicable. If we could keep it the way it is, with 5 different CSV report files, but send one email with all 5 attachments, that would be ideal.
Multiple joins are causing slowness in a Splunk dashboard. Is there any other way to make it faster? How can we combine those joins?

index="xxx" applicationName="api" environment=$env$ timestamp correlationId trace message ("Ondemand Started*" OR "Expense Process started")
| rename sourceFileName as SourceFileName content.JobName as JobName
| eval "FileName/JobName"=coalesce(SourceFileName,JobName)
| rename timestamp as Timestamp correlationId as CorrelationId tracePoint as Tracepoint message as Message
| eval JobType=case(like('Message',"%Ondemand Started%"),"OnDemand", like('Message',"Expense Process started%"),"Scheduled", true(),"Unknown")
| eval Message=trim(Message,"\"")
| table Timestamp CorrelationId Tracepoint JobType "FileName/JobName" Message
| join CorrelationId type=left
    [ search index="xxx" applicationName="api" trace=ERROR
    | rename correlationId as CorrelationId trace as TracePoint message as StatusMessage
    | dedup CorrelationId
    | table CorrelationId TracePoint StatusMessage]
| table Timestamp CorrelationId TracePoint JobType "FileName/JobName" StatusMessage
| join CorrelationId type=left
    [ search index="xxx" applicationName="api" message="*Before Calling flow archive-Concur*"
    | rename correlationId as CorrelationId content.loggerPayload.archiveFileName as ArchivedFileName
    | table CorrelationId ArchivedFileName]
| table Timestamp CorrelationId Tracepoint JobType "FileName/JobName" ArchivedFileName StatusMessage
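A hedged sketch of the usual join-free rewrite: fetch all three event types in one search and roll them up with stats by CorrelationId, which avoids the subsearch limits and repeated index scans that make join slow. Field names are taken from the searches above; the filters may need tightening.

index="xxx" applicationName="api" environment=$env$ ("Ondemand Started" OR "Expense Process started" OR trace=ERROR OR message="*Before Calling flow archive-Concur*")
| rename correlationId as CorrelationId
| eval FileJobName=coalesce(sourceFileName,'content.JobName')
| eval JobType=case(like(message,"%Ondemand Started%"),"OnDemand", like(message,"Expense Process started%"),"Scheduled")
| eval StatusMessage=if(trace="ERROR",message,null())
| eval ArchivedFileName='content.loggerPayload.archiveFileName'
| stats earliest(timestamp) as Timestamp values(JobType) as JobType values(FileJobName) as "FileName/JobName" first(StatusMessage) as StatusMessage values(ArchivedFileName) as ArchivedFileName by CorrelationId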
I've been intrigued by this idea and was not able to find anything anywhere on the topic, but is it possible to use generative #AI to create Splunk #dashboards?

It's a problem I have been wondering whether gen AI can help with, as creating dashboards can involve a very steep learning curve and be tricky to get started with. It's a great area where I feel AI would be super useful, and I'm sure the geniuses at Splunk are already looking into it!
Hi friends, I am getting my BIG-IP logs in syslog format on my single Splunk deployment instance, and I am having trouble figuring out the proper way to rename the values to (Disabled, Forced Offline, Enabled).

Note: "pool_member_new_session_enable 1 pool_member_monitor_state 3" means the pool member is manually Disabled.
** pool_member_update_status 1 pool_member_new_session_enable 1 pool_member_monitor_state 3 **

Note: "pool_member_new_session_enable 1 pool_member_monitor_state 20" indicates the pool member is manually Forced Offline.
** pool_member_update_status 1 pool_member_new_session_enable 1 pool_member_monitor_state 20 **

Note: "pool_member_new_session_enable 2 pool_member_monitor_state 3" means the pool member is manually Enabled.
** pool_member_update_status 1 pool_member_new_session_enable 2 pool_member_monitor_state 3 **
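A hedged sketch, assuming the two numbers are not yet extracted as fields (the rex handles that; index, sourcetype, and field names are made up):

index=your_bigip_index sourcetype=your_syslog_sourcetype "pool_member_update_status"
| rex "pool_member_new_session_enable (?<new_session_enable>\d+) pool_member_monitor_state (?<monitor_state>\d+)"
| eval member_state=case(new_session_enable=1 AND monitor_state=3, "Disabled", new_session_enable=1 AND monitor_state=20, "Forced Offline", new_session_enable=2 AND monitor_state=3, "Enabled", true(), "Unknown")

This maps exactly the three combinations in the notes above; any other combination lands in Unknown rather than being mislabeled.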
Hi, I am trying to set up the Ajax handler to capture extra data from the request payload, but context.data is always empty. Any idea why it's empty?

{ "url": "https://xyz.com/vevents", "method": "POST", "data": "", "responseHeaders": "" }

https://docs.appdynamics.com/appd/22.x/22.1/en/end-user-monitoring/browser-monitoring/browser-real-user-monitoring/configure-the-javascript-agent/add-custom-user-data-to-a-page-browser-snapshot#id-.AddCustomUserDatatoaPageBrowserSnapshotv22.1-custom-user-data-exsCustomUserDataExamples
Hi, in Splunk's License Usage Report, for the "Previous 60 days" view, the first two charts, "Daily License Usage" and "Percentage of Daily License Quota Used", have only 31 columns instead of the expected 60. The Splunk version is 9.0.41. Just wanted to report it.
When searching metrics.log for the indexers in Splunk Cloud, I'm seeing the following:

group=pipeline, name=typing, processor=regexreplacement, cpu_seconds=0.002, executes=838, cumulative_hits=10378371, in=113716, out.splunk=111870, out.drop=1846

What is out.drop telling me here? Am I losing data, and what do I need to configure to stop losing it?
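As far as I know, out.drop in the typing pipeline's regexreplacement processor typically counts events intentionally routed to the null queue by a TRANSFORMS rule with FORMAT = nullQueue (yours or one shipped in an app), rather than data lost under load. A hedged sketch to trend it and see whether the drops are steady filtering or a spike:

index=_internal source=*metrics.log* group=pipeline name=typing processor=regexreplacement
| timechart span=1h sum(out.drop) as dropped sum(out.splunk) as kept

If the dropped volume is unexpected, reviewing the props/transforms nullQueue rules in your Splunk Cloud apps would be the place to start.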
Hello everyone, I am looking for an SPL solution to determine the length of the longest common substring of two strings. Is there any built-in way to do that? Or is there an app that provides a command for it?

Thanks in advance!
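As far as I know there is no built-in SPL function for this, which usually points to a custom search command. A minimal sketch of the core computation in Python (the splunklib command wrapper is omitted; this is just the classic dynamic-programming longest-common-substring length):

def lcs_length(a: str, b: str) -> int:
    # prev[j] holds the length of the common suffix ending at a[i-1] and b[j-1]
    prev = [0] * (len(b) + 1)
    best = 0
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

print(lcs_length("splunk", "spelunking"))  # 4, for "lunk"

Wrapped inside a custom streaming command, this could populate a per-event field from two string fields.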