All Topics

I have some data that is collected by an AWS Lambda function and delivered to Splunk via HEC, with the listeners on the indexers. The data contains Japanese characters, but they are not displayed properly in Splunk Web. I have applied a host-level stanza with CHARSET = SHIFT-JIS on both the search head and the indexers, but the data is still displayed as question marks in Splunk Web. I have tried AUTO, UTF-8, and SHIFT-JIS without success.
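One hedged approach: since character-set conversion happens at parse time, CHARSET belongs in props.conf on the first full Splunk instance that parses the data (for HEC on the indexers, that is the indexers themselves), and a search-head stanza has no effect. A sketch, where the sourcetype name is a placeholder assumption:

```
# props.conf on the indexers -- sketch; "lambda:json" is a placeholder sourcetype
[lambda:json]
CHARSET = SHIFT-JIS
```

Note this only affects data indexed after a restart; events already stored as question marks cannot be repaired at search time.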
In a SHC running version 8.2.10, we occasionally see this type of ERROR message from SHCRepJob, for example in splunkd.log on a SHC member:

05-24-2023 17:39:31.941 +0000 ERROR SHCRepJob [54418 SHPPushExecutorWorker-0] - failed job=SHPRepJob peer="<PEER1 FQDN>", guid="PEER1C47-1E44-48A0-A0F2-35DE6E449C65" aid=1684949135.77748_B2392C47-1E44-48A0-A0F2-35DE6E449C65, tgtPeer="<PEER2 FQDN>", tgtGuid="PEER2D44-E56B-4ABA-822A-4C40ACF1E484", tgtRP=<ReplicationPort>, useSSL=false tgt_hp=10.9.129.18:8089 tgt_guid=PEER2D44-E56B-4ABA-822A-4C40ACF1E484 err=uri=https://PEER1:8089/services/shcluster/member/artifacts/1684949135.77748_PEER1C47-1E44-48A0-A0F2-35DE6E449C65/replicate?output_mode=json, error=500 - Failed to trigger replication (artifact='1684949135.77748_PEER1C47-1E44-48A0-A0F2-35DE6E449C65') (err='event=SHPSlave::replicateArtifactTo invalid status=alive to be a source for replication')

We used to have bundle replication issues, but searches appear to be running and completing as expected. Is this something to worry about, and why does it happen?
Sample event:

{ durationMs: 83 properties: { request-id: 1c910793-8be4-4850-83d5-f360b4b05478 method: GET path: /scenarios/636d40506930b10b8f082f27 } }

I am trying to create a table of counts by properties.path, and I want to combine some of the rows into the single path /scenarios/{id}. But my replace('properties.path') gives an empty value, as seen in the values(path) column. Please help me figure out why replace doesn't work here.
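For reference, field names containing dots must be single-quoted in eval, and replace() needs all three arguments (field, regex, replacement) or it returns nothing useful. A sketch, assuming the id segment is a 24-character hex string:

```
| eval path=replace('properties.path', "/scenarios/[0-9a-f]{24}", "/scenarios/{id}")
| stats count by path
```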
Hello Splunkers, I am trying to build a query using a transaction command that starts with "Starting MST" and ends with the boot timestamp:

| transaction host startswith="Starting MST" endswith="B timestamp:" maxspan=15m keepevicted=1

But the raw events contain two events that start with "Starting MST", and I want the first one. The transaction command takes the latest instead of the earliest. How can I make transaction use the earliest startswith event?

2023-05-25T15:03:28.506750-07:00 ABC log-.sh[20252]: Starting MST (sftware) driver set
2023-05-25T15:03:38.455201-07:00 ABC log-.sh[22116]: Starting MST (sftware) driver set
2023-05-25T15:04:11.372010-07:00 ABC log-.sh[24041]: B timestamp: 2023-05-25 14:59:16
2023-05-25T15:04:11.367392-07:00 ABC log-.sh[24041]: SN: 16234567890

Thanks in advance.
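Since transaction builds groups working backwards from the most recent matching startswith event, one workaround is to skip transaction entirely and compute the span with stats, using eval expressions inside the aggregate functions. A sketch under that assumption:

```
("Starting MST" OR "B timestamp:")
| stats earliest(eval(if(searchmatch("Starting MST"), _time, null()))) as mst_start
        latest(eval(if(searchmatch("B timestamp:"), _time, null()))) as boot_end
        by host
| eval duration=boot_end-mst_start
```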
I have logs landing in Splunk Cloud that are normal `postfix_syslog` lines, but wrapped in a JSON object. Three examples:

{"line":"May 26 21:20:53 postfix postfix/smtpd[5654]: disconnect from ip-10-0-8-152.ec2.internal[10.0.8.152] commands=0/0","source":"stdout","tag":"c38633d4c285"}
{"line":"May 26 20:54:03 postfix postfix/relay/smtp[5646]: 7EC2D34FCCBB3F9BF5AE0: to=\u003cuser@domain.com\u003e, relay=none, delay=265110, delays=265050/0.03/60/0, dsn=4.4.1, status=deferred (connect to otherdomain-com.mail.protection.outlook.com[104.47.66.10]:25: Connection timed out)","source":"stdout","tag":"c38633d4c285"}
{"line":"May 26 18:48:19 postfix postfix/relay/smtp[188]: 785A2C8161D5BF5DB2B20: to=\u003cuser@domain.com\u003e, relay=anotherdomain-com.mail.protection.outlook.com[104.47.59.138]:25, delay=1.7, delays=0.14/0.03/0.32/1.2, dsn=2.6.0, status=sent (250 2.6.0 \u003c20230428184817.785A2C8161D5BF5DB2B20@postfix\u003e [InternalId=19529216330946, Hostname=serial.number.prod.outlook.com] 8233 bytes in 0.374, 21.462 KB/sec Queued mail for delivery)","source":"stdout","tag":"e6a9651d6930"}

I would like the same fields extracted for these logs as if they were plain `postfix_syslog` lines. Simply setting the sourcetype to `postfix_syslog` does not work: a couple of fields get extracted, but most do not. How should I deal with this? Implement a source type that "calls" the `postfix_syslog` sourcetype on the value of the `line` JSON element? Write a custom sourcetype that saves the value of the `line` element to a variable `actual_log_content`, then copy/paste all the configuration of the `postfix_syslog` sourcetype modified to look at `actual_log_content`? Go in and hack at the thing handing logs to Splunk to prevent it from JSON-wrapping the lines? What's the right way to cope?
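One hedged option, if the shipper cannot be changed: strip the JSON wrapper at parse time with a SEDCMD on a custom sourcetype, then let the postfix_syslog search-time extractions run against the unwrapped _raw. A sketch (the sourcetype name and regex are assumptions, and JSON escapes such as \u003c would still need handling):

```
# props.conf -- sketch
[postfix_syslog_wrapped]
SEDCMD-unwrap = s/^\{"line":"(.*)","source":"[^"]*","tag":"[^"]*"\}$/\1/
```

The cleaner fix is usually upstream: the wrapper looks like Docker's json-file logging driver, so reconfiguring the log shipper to forward raw lines avoids the problem entirely.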
Below is my original XML code for the dashboard. In the EPP TimeZone panel I have modified the query to use tstats. The query works on its own, but compared with the original query I am not able to pass the tokens ((prodct="$eppProduct$") OR site="$eppProduct$") in my tstats query. Can anyone please help with this?

<form>
  <label>EPP Mode Dashboard</label>
  <fieldset submitButton="false" autoRun="true">
    <input type="dropdown" token="eppProduct" searchWhenChanged="true">
      <label>Product</label>
      <fieldForLabel>all_product</fieldForLabel>
      <fieldForValue>all_product</fieldForValue>
      <search>
        <query>| tstats count where index=epp-prd-clc by site host host_ip
| eval prodct=case(like(host, "%prod%"), "PROD", like(host, "%pat%"), "PAT", like(host, "%sit%"), "SIT", like(host, "%dev%"), "DEV")
| stats count by site prodct
| eval all_product=if(like(prodct, "PROD"), site, prodct)</query>
        <earliest>-4h@h</earliest>
        <latest>now</latest>
      </search>
      <default>*</default>
      <initialValue>*</initialValue>
      <choice value="*">ALL</choice>
    </input>
    <input type="time" token="eppTime" searchWhenChanged="true">
      <label>Time</label>
      <default>
        <earliest>-60m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>EPP TimeZone</title>
      <chart>
        <title>Average Response Time</title>
        <search>
          <query>index=epp-prd-clc variable="ap" virginal="ssc" (prodct="$eppProduct$" OR site="$eppProduct$") deposit="calp"
| eval Deposit=upper(deposit)
| timechart avg(duration) as Duration
| eval Duration=round(Duration,2)</query>
          <earliest>$eppTime.earliest$</earliest>
          <latest>$eppTime.latest$</latest>
        </search>
        <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisMiddle</option>
        <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
        <option name="charting.chart">line</option>
        <option name="charting.chart.nullValueMode">zero</option>
        <option name="charting.chart.showDataLabels">minmax</option>
        <option name="charting.drilldown">all</option>
        <option name="charting.layout.splitSeries">1</option>
        <option name="refresh.display">none</option>
      </chart>
    </panel>
  </row>
</form>

Below is the modified XML dashboard code using tstats.

<form>
  <label>EPP Mode Dashboard</label>
  <fieldset submitButton="false" autoRun="true">
    <input type="dropdown" token="eppProduct" searchWhenChanged="true">
      <label>Product</label>
      <fieldForLabel>all_product</fieldForLabel>
      <fieldForValue>all_product</fieldForValue>
      <search>
        <query>| tstats count where index=epp-prd-clc by site host host_ip
| eval prodct=case(like(host, "%prod%"), "PROD", like(host, "%pat%"), "PAT", like(host, "%sit%"), "SIT", like(host, "%dev%"), "DEV")
| stats count by site prodct
| eval all_product=if(like(prodct, "PROD"), site, prodct)</query>
        <earliest>-4h@h</earliest>
        <latest>now</latest>
      </search>
      <default>*</default>
      <initialValue>*</initialValue>
      <choice value="*">ALL</choice>
    </input>
    <input type="time" token="eppTime" searchWhenChanged="true">
      <label>Time</label>
      <default>
        <earliest>-60m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>EPP TimeZone</title>
      <chart>
        <title>Average Response Time</title>
        <search>
          <query>| tstats avg(duration) as Duration where index=epp-prd-clc TERM(variable) TERM("ap") TERM(virginal) TERM("ssc") TERM(deposit) TERM("calp") BY PREFIX(deposit:) _time
| rename deposit: as Deposit
| eval Deposit=upper(deposit)
| timechart
| eval Duration=round(Duration,2)</query>
          <earliest>$eppTime.earliest$</earliest>
          <latest>$eppTime.latest$</latest>
        </search>
        <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisMiddle</option>
        <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
        <option name="charting.chart">line</option>
        <option name="charting.chart.nullValueMode">zero</option>
        <option name="charting.chart.showDataLabels">minmax</option>
        <option name="charting.drilldown">all</option>
        <option name="charting.layout.splitSeries">1</option>
        <option name="refresh.display">none</option>
      </chart>
    </panel>
  </row>
</form>
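For what it's worth, tstats can only filter on indexed fields in its where clause; prodct is computed with eval, so it cannot be passed as a token there. One hedged workaround (assuming duration and site are indexed fields, as the modified query already implies) is to keep the indexed filters in tstats, recompute prodct afterwards, and post-filter:

```
| tstats avg(duration) as Duration where index=epp-prd-clc by _time span=5m host site
| eval prodct=case(like(host, "%prod%"), "PROD", like(host, "%pat%"), "PAT", like(host, "%sit%"), "SIT", like(host, "%dev%"), "DEV")
| search prodct="$eppProduct$" OR site="$eppProduct$"
| timechart avg(Duration) as Duration
```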
I have a Splunk account that I log in to via VPN. The issue is that when I use the "search" area I cannot get any output, but when I use the "find" area with the same query, I get my output. Is there a way to change that? I just want to put my query in the "search" area to get my output.
How do I turn this:

service  status  count
Gmdl     200     5
Aesp     200     13
abc      200     8
aesp     501     61
abc      501     22
Gmdl     400     11

into this:

         200  400  501
gmdl     5    11   0
aesp     13   61   0
abc      8    0    22
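Assuming the first table is already the output of a stats count by service status, this is a job for chart, plus normalizing the mixed-case service names:

```
| eval service=lower(service)
| chart sum(count) over service by status
| fillnull value=0
```

If you are starting from raw events instead, | chart count over service by status does the same pivot in one step.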
I am trying to refine a search based on a subquery, where the subquery is not a filter of the outer query. I need to check whether a certain event happened in a past time range (which is different from the outer query's). Say the current log line is:

"Timestamp 9am Log:Info found x=2$ on day1"

I want to search something like this:

app=my-app "found x=2$ on day1" | eval isThereAEventBefore=(subQuery greater than 0, 1, 0)

replacing subQuery with:

(app=my-app "found x=*$ on day1 earliest=-1h" | stats count)

When I tried to write this query, I got: Error in 'eval' command: The expression is malformed. Expected ).
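One way to express this, sketched with appendcols so the subsearch's count lands as a column next to the outer results (moving the time range outside the quoted string is an assumption about the intent, and filldown copies the single subsearch row onto every outer row):

```
app=my-app "found x=2$ on day1"
| appendcols [ search app=my-app "found x=*$ on day1" earliest=-1h | stats count as priorCount ]
| filldown priorCount
| eval isThereAEventBefore=if(priorCount > 0, 1, 0)
```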
I am trying to understand how I can plot my multi-cloud subscription/service consumption data from different geo regions on a clustered choropleth map visualization. I have multi-cloud subscriptions with services provisioned and consumed from different regions. I want to know where to start. From reading articles and documentation, I understand I should have longitude/latitude information in my data for each of the regions I want to plot (at least some, if not all). None of my CSP data in the respective indexes has this information. If I have to come up with a CSV, I am unsure how to link them to get this working. Has anyone come across a similar use case? Any help would be appreciated.
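A common starting point, sketched under the assumption that you build a small CSV lookup (here called region_to_country, a hypothetical name) mapping each CSP region to a country: choropleths keyed on the built-in geo_countries featureCollection only need a country name, not lat/long. The index and field names below are assumptions:

```
index=cloud_billing
| stats sum(cost) as total_cost by region
| lookup region_to_country region OUTPUT country
| stats sum(total_cost) as total_cost by country
| geom geo_countries featureIdField=country
```

Lat/long is only required for marker (cluster) maps, which are built with geostats instead.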
Hi friends, we use an image hyperlink in our project dashboard. It was working until yesterday; this morning our Splunk Cloud was upgraded to the new 9.0.2303.101 release. After the upgrade the image hyperlink no longer works, but the string hyperlink still works fine. Could you please help me fix this issue in the source code? This is my dashboard code:

<button class="pg_button_launch">
<a class="pg_link" href="prime_global_os_base_metrics" target="_blank">OS Base Metrics</a> -- this line works fine.
<a class="pg_link" href="prime_global_os_base_metrics" target="_blank">
<img class="baseImage" src="/static/app/PG_COMMON_LIBRARY/images/Dashboard2.png" style="height: 50px;"/>
<img class="overlapImage" src="/static/app/PG_COMMON_LIBRARY/images/OpenNew.png"/>
</a> -- this part no longer works after the upgrade.
</button>
Solution for charts: add this line:

<option name="charting.seriesColors">[0x06D9C,0x4FA484,0xF59E63,0xB4595C,0x62B3B2,0x284B6A]</option>
stream=stdout 9 INFO [DataEnrichmentController] (default task-597) start : comm-uuid : rsvp-service : nljnj42343n43k
stream=stdout 4 INFO [DataEnrichmentController] (default task-760) start : commID : rsvp-service : nk324kjln4kj34
stream=stdout 4 INFO [DataEnrichmentController] (default task-760) start : comm-uuid : rsvp-service : vflijiopjoe1442kljn;k23

I want to extract the highlighted word from the log lines above.
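Assuming the highlighted word is the trailing identifier after the last colon (and allowing for both the comm-uuid and commID labels), a hedged rex sketch:

```
| rex "(?:comm-uuid|commID) : \S+ : (?<comm_id>\S+)"
```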
I'd like to query Splunk with the SDK. I'm using the free version after switching from a trial license, and I cannot get it to work. This page seems to suggest it is possible: https://haydz.github.io/2021/01/02/Python-Connect-Splunk.html. Are there certain steps that need to be taken to get this working?
When I install Splunk Enterprise or the Splunk Universal Forwarder on Linux, I note that by default the installer creates a new user named splunk, right? I then enable boot-start for this user and all is good. But on Windows there is no default splunk user created, so should I create the user manually and then enable boot-start for it? Also, how can I find out which user is running Splunk on my Windows machine, whether it's Splunk Enterprise or the Universal Forwarder?
Hi team, could you help me build an alert that triggers when a user locks out their account 3 times in a 24-hour period, for each user? I'm thinking of something like this:

index="main" source="wineventlog:security" EventCode=4740 earliest=-25h
| rex field=_raw "(?<Account>Account That Was Locked Out:)"
| search NOT Account_Name="Guest"
| eval Period=if(_time>relative_time(now(),"-1h"),"New","Old")
| stats count values(Period) as Period by acct_name
| where mvcount(Period)=1 AND Period="New" AND count >= 3
| sort -count
| head 10
| fields -Period

but apparently it doesn't work.
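A simpler shape that may be closer to the intent, assuming the account name is already extracted as Account_Name (EventCode 4740 is "a user account was locked out", and the rex above only matches a literal label, so it never captures the account):

```
index="main" source="wineventlog:security" EventCode=4740 NOT Account_Name="Guest" earliest=-24h
| stats count by Account_Name
| where count >= 3
```

Scheduled periodically over a 24-hour window, this returns one row per user with three or more lockouts, which can drive a per-result alert trigger.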
Hello all, I need help with a file-corruption issue on my SC4S servers. I've had two SC4S servers running for several months with no issues. Recently the contents of /opt/sc4s/env_file mysteriously changed, which caused both SC4S servers to stop forwarding traffic, at different times. My Linux admin confirmed no one manually changed the file, so I can't figure out how this is happening. Has this happened to anyone else, and if so, how did you identify and fix the problem? Thank you all in advance.
When I create a search query using the Endpoint datamodel, I am unable to get results for that use case. For example:

datamodel=Endpoint.Processes where Processes.process_name IN("cat", "nano*") AND Processes.process IN("*/etc/shadow*", "*/etc/passwd*")

Here Processes is the dataset. I can only see a count of processes as a number when I check the data in the pivot table.
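For comparison, a tstats sketch against the same dataset (assuming the Endpoint data model is populated; summariesonly=true would apply only if it is accelerated):

```
| tstats count from datamodel=Endpoint.Processes
    where Processes.process_name IN ("cat", "nano*") Processes.process IN ("*/etc/shadow*", "*/etc/passwd*")
    by Processes.process_name Processes.process
| rename Processes.* as *
```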
I am facing alignment issues while importing a dashboard into Dashboard Studio. Can somebody help with this?
Hi, we are using SaaS controller build 23.4.0-1559, and all our browser jobs running on our local PSA fail with the error "Invalid measurement status state when publishing results: Failed to save result". We have tried both an older PSA agent and the latest one, but get the same error. Are there any workarounds for this issue? Thanks, RB