All Posts



Hello. After upgrading from Classic to Victoria Experience on our Splunk Cloud stack, we have encountered issues retrieving data from AWS SQS-based S3. The inputs remained after the migration, but for some of them the SQS queue name appears to be missing. When we try to configure these inputs, we immediately receive a 404 error in python.log. Please see the screenshot below for reference. Furthermore, the error message indicates that the SQS queue may not be present in the given region; however, we have confirmed that the queue does exist in the specified region. Has anyone else experienced this issue and can offer assistance? Thank you.
@phanTom Could you please point me to the relevant documentation for reference?
So use eval and transform the epoch value to the desired tz? I haven't found a built-in Splunk function for that, just threads like this one that use a fixed offset, but since that offset changes from 5 to 6 hours with daylight saving time, do you know of one that supports 'cst6cdt'? And thank you! Overall that approach makes sense to me: pass something (make something to pass) other than the click.value.
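For comparison outside Splunk, here is a small Python sketch (not SPL) using the standard zoneinfo module, which handles the 5/6-hour DST switch for the CST6CDT zone automatically; the function name is just for illustration:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9; needs system tzdata

def epoch_to_cst6cdt(epoch: int) -> str:
    """Convert a Unix epoch to CST6CDT wall-clock time; DST is applied automatically."""
    dt = datetime.fromtimestamp(epoch, tz=timezone.utc)
    return dt.astimezone(ZoneInfo("CST6CDT")).strftime("%Y-%m-%d %H:%M:%S %Z")

# A January epoch resolves to CST (UTC-6), a July epoch to CDT (UTC-5):
print(epoch_to_cst6cdt(1704067200))  # 2023-12-31 18:00:00 CST
print(epoch_to_cst6cdt(1719792000))  # 2024-06-30 19:00:00 CDT
```

The point is that a named zone like CST6CDT encodes the DST rules, so you never hard-code a 5- or 6-hour offset yourself.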
Missing props could be a problem. Try these settings:

[app:json]
TIME_PREFIX = ^
TIME_FORMAT = %s
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = FALSE
MAX_TIMESTAMP_LOOKAHEAD = 10
TRUNCATE = 10000
EVENT_BREAKER_ENABLE = TRUE
EVENT_BREAKER = ([\r\n]+)

Note the change in sourcetype name. Avoid using hyphens in identifiers since they could be mistaken for the subtraction operator. By default, Splunk will not search future times, so it won't show timestamps that were misinterpreted in that direction. Try:

index=app_prod earliest=-1y latest=+1y
My current search is something like this:

(index=1 sourcetype="x") OR (index=2 sourcetype="y" "some extra filters") "*(a name like RU3NDS just for testing but there are many like this one)*"
| eval joined_name = upper(coalesce(NAME, name))

NAME (upper case) is from index 1 and name (lower case) is from index 2, and this gave me a table like this:

joined_name | other values from index 1 | other values from index 2
H-RU3NDS_DAT_CDSD231_01 | | ...
H-RU3NDS_DAT_CDSD231_02 | | ...
RU3NDS | ... |

The first two values are from index 2 and the third is from index 1. I need to join this first column on a unique value like RU3NDS. I've tried the rex command too, but it didn't work.
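One way to collapse those variants onto the bare identifier might be a rex over the joined field. A sketch, based only on the two sample shapes shown above (the capture pattern and the field name core_name are assumptions):

```
(index=1 sourcetype="x") OR (index=2 sourcetype="y" "some extra filters")
| eval joined_name = upper(coalesce(NAME, name))
| rex field=joined_name "^(?:H-)?(?<core_name>[A-Z0-9]+)"
| stats values(*) as * by core_name
```

The optional "H-" prefix is stripped and the match stops at the first underscore, so both H-RU3NDS_DAT_CDSD231_01 and RU3NDS yield core_name=RU3NDS; stats then merges the rows from both indexes under that shared key.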
Has anyone noticed that push notifications through the Splunk Mobile app have stopped working recently? We are using Splunk on-prem, with Splunk Secure Gateway set up and prod.spacebridge.spl.mobi set as the gateway, but I noticed the notifications stopped appearing on my home screen when my iPhone was locked. Other colleagues using different devices are complaining of the same issue. I can't remember the exact date, but it may have been around the 3rd of May. No changes to our config have been made, but I'd be interested to know if anyone else is having this issue.
@harishlnu If a Correlation Search is configured to send to SOAR, first check the _internal logs for the send_to_phantom mod action for failures in sending, then use ingestd.log to look for failures to ingest on the SOAR side. ingestd.log should be one of the DAEMON logs you can forward from SOAR to Splunk.
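A starting point for the Splunk-side check might be as simple as searching _internal for the mod action name; treat this as a sketch, since the exact sourcetypes and message text vary by version:

```
index=_internal send_to_phantom (ERROR OR WARN*)
```

If that surfaces nothing, widening to all events mentioning send_to_phantom in the relevant time range can confirm whether the action ran at all.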
Hi, we have Splunk (v9.2) in a clustered environment that manages tons of different logs from a complex and varied network. A few departments each have a Sophos firewall that sends logs through syslog (we would have used a UF, but we couldn't, because IT security can't touch those servers). In order to split the inputs based on the source type, we set those Sophos logs to be sent to port 513 of one of our HFs and created an app to parse them with a regex. The goal was to reduce the logs and save license usage.

So far, so good; everything was working as intended. Until, as it turns out, every night, exactly at midnight, the Heavy Forwarder stops the collection from those sources (only those) and nothing is indexed until someone restarts the splunkd service (which could potentially be never) and gives new life to the collector.

Here's the odd part: during the no-collection time, tcpdump shows syslog data arriving on port 513, so the firewall never stops sending data to the HF, but no logs are indexed. Only after a restart can we see the logs being indexed again.

The Heavy Forwarder at issue sits on top of an Ubuntu 22 LTS minimized server edition. Here are the app configuration files:

- inputs.conf

[udp:513]
sourcetype = syslog
no_appending_timestamp = true
index = generic_fw

- props.conf

[source::udp:513]
TRANSFORMS-null = nullQ
TRANSFORMS-soph = sophos_q_fw, sophos_w_fw, null_ip

- transforms.conf

[sophos_q_fw]
REGEX = hostname\sulogd\[\d+\]\:.*action=\"accept\".*initf=\"eth0\".*
DEST_KEY = queue
FORMAT = indexQueue
#
[sophos_w_fw]
REGEX = hostname\sulogd\[\d+\]\:.*action=\"accept\".*initf=\"eth0\".*
DEST_KEY = _MetaData:Index
FORMAT = custom_sophos
#
[null_ip]
REGEX = dstip=\"192\.168\.1\.122\"
DEST_KEY = queue
FORMAT = nullQueue

We didn't see anything out of the ordinary in the processes that start at midnight on the HF. At this point we have no clue about what's happening.
How can we troubleshoot this situation? Thanks
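For anyone digging into the same symptom, one place to look might be the HF's own internal logs around midnight. A sketch only; the host placeholder and the exact error wording are assumptions and vary by version:

```
index=_internal host=<your_hf> source=*splunkd.log* earliest=-1d (udp OR "513") (ERROR OR WARN)
```

Correlating any splunkd warnings with the exact time the input stalls can at least separate an input-pipeline problem from a parsing/routing one.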
Hi @Roberto.Barnes, Thanks for asking your question on the community. Since it's been a few days with no reply, you can reach out to AppDynamics Support for more help, or even try reaching out to your AppD Rep with this particular question. How do I submit a Support ticket? An FAQ 
Hi @niketn! The steps to move the lookup editor code into my own app worked great, thank you! In light mode, everything is working perfectly. However, for dashboards that are in dark mode, the editable lookup tables are being displayed as white text on a white background. Do you (or anyone else) have any tips on how to make the lookup editor tables display correctly in dark mode? Even just changing the background colour or text colour of the embedded editable lookup would be really helpful. Thank you so much for any help!
I'm sure you've already solved this one, but maybe it'll help someone else down the line. We ran into a similar issue lately; in our case it was caused by a pre-9.x copy of search.xml in $SPLUNK_HOME/etc/apps/search/local/ui/dashboards.
Splunk cannot search what it doesn't have.
Try to do this:

1. Open the dashboard in edit mode: navigate to the dashboard you want to edit and click the "Edit" button.

2. Access the source code: in the top right corner of the dashboard editor, click the "Source" button to open the JSON source code of the dashboard.

3. Add custom CSS: insert a custom CSS block within the JSON to target and hide the export and full-screen icons. To do this, define a css block within the options field of your dashboard's JSON configuration. Here's a sample:

{
  "type": "dashboard",
  "title": "Your Dashboard Title",
  "options": {
    "css": ".dashboard-panel .dashboard-panel-action { display: none !important; }"
  },
  "visualizations": [
    {
      "type": "icon",
      "options": {
        "title": "Your Icon Title",
        "drilldown": {
          "type": "link",
          "dashboard": "linked_dashboard_name"
        }
      }
    }
  ]
}

(Add other visualizations to the visualizations array as needed.)

4. Save and verify: save the changes to the dashboard and verify that the export and full-screen icons no longer appear when hovering over the icon.

Let me know if this works.
But if the VM does not log that data, then it is not possible?
@Santosh2, glad to hear that the solution seems to be working. It would be great if you could accept the answer as a solution so that it helps other community users.
Hi @BB_MW , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
@tej57 Got it, thank you
inputs.conf:

[monitor:///var/log/json]
disabled = 0
index = app_prod
sourcetype = app-json
crcSalt = <SOURCE>

There is no props.conf.

Events:

1712744099:{"jsonefd":"1.0","result":"1357","id":1}
1712744400:{"jsonefd":"1.0","result":"1357","id":1}
1712745680:{"jsonefd":"1.0","result":"1357","id":1}
1714518017:{"jsonefd":"1.0","result":"1378","id":1}
1715299221:{"jsonefd":"1.0","result":"1366","id":1}

As you said, I searched with All Time and no results were found.
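As a side note, the epoch prefixes in those sample events decode to dates in spring 2024, which is what Splunk should assign as _time if a props stanza with TIME_FORMAT = %s is in place. A quick check, outside Splunk:

```python
from datetime import datetime, timezone

# Epoch prefixes copied from the sample events above
epochs = [1712744099, 1712744400, 1712745680, 1714518017, 1715299221]

for e in epochs:
    # Each prefix is a plain Unix epoch in seconds, i.e. TIME_FORMAT = %s
    print(e, "->", datetime.fromtimestamp(e, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC"))
```

If these print past dates (they do; the first is 2024-04-10), the events should be findable with a wide-enough time range once timestamp parsing is configured.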
Sure @gcusello

Sample event:

application: uslcc-nonprod
cluster: AKS-SYD-NPDI1-ESE-2
container_id: 9ae09dba5f0ca4c75dfxxxxxxb6b1824ec753663f02d832cf5bfb6f0dxxxxxxx
container_image: acrsydnpdi1ese.azurecr.io/ms-usl-acct-maint:snapshot-a23584a1221b57xxxxxb437d80xxxxxxb6e65
container_name: ms-usl-acct-maint
level: INFO
log: 2024-05-06 11:08:40.385 INFO 26 --- [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] [CID:Perf-May06-9-151615] l.AccountCreditLimitChangedKafkaListener : message="xxxxx listener 'account credit limit event enrichment'", elapsed_time_ms="124"
namespace: uslcc-nonprod
node_name: aks-application-3522xxxxx-vmss0000xl
pod_ip: 10.209.82.xxx
pod_name: ms-usl-acct-maint-ppte-7dc7xxxxxx-2fc58
tenant: uslcc
timestamp: 2024-05-06 11:08:40.385

Raw:

{"log":"2024-05-06 11:08:40.385 INFO 26 --- [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] [CID:Perf-May06-9-151615] l.AccountCreditLimitChangedxxxxxListener : message=\"xxxxx listener 'account credit limit event enrichment'\", elapsed_time_ms=\"124\"","application":"uslcc-nonprod","cluster":"AKS-SYD-NPDI1-ESE-2","namespace":"uslcc-nonprod","tenant":"uslcc","timestamp":"2024-05-06 11:08:40.385","level":"INFO","container_id":"9ae09dba5xxxxxfd2724b6b1824ec753663f02dxxxxxf0d55d59940","container_name":"ms-usl-acct-maint","container_image":"acrsydnpdi1ese.azurecr.io/ms-usl-acct-maint:snapshot-a23584a1221b5749xxxxxd803eb2aabaxxxxx5","pod_name":"ms-usl-acct-maint-ppte-7dc7c9xxxxc58","pod_ip":"10.209.82.xxx","node_name":"aks-application-35229300-vmssxxxxxl"}
I found the issue: there was a rogue local/props.conf in a completely unrelated app that had all sorts of EXTRACTs, FIELDALIASes, etc. but, crucially, no stanza header! One of the extractions defined a field called 'action' with a regex that didn't match anything in my raw event and, because the rogue extract had higher precedence, my extraction didn't get populated. Rather than ignoring the stray props statements, Splunk applied the EXTRACTs, etc. to everything! Removing that props.conf solved my issue and everything is good.
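For anyone hitting the same thing: a props extraction is normally scoped under a sourcetype (or source/host) stanza. A minimal sketch with hypothetical stanza and field names showing the safe shape:

```
# props.conf -- scoped correctly: applies only to events of this sourcetype
[my_sourcetype]
EXTRACT-action = action=(?<action>\w+)

# By contrast, EXTRACT-... lines sitting above any [stanza] header fall into
# the default scope and can end up applied far more broadly than intended.
```

Checking each local/props.conf for settings that precede the first stanza header is a quick way to catch this class of problem.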