All Posts

Hi, we have Splunk (v9.2) in a clustered environment that manages tons of different logs from a complex and varied network. A few departments each have a Sophos firewall that sends logs through syslog (we would have used a UF, but IT security can't touch those servers). In order to split the inputs based on source type, we set those Sophos logs to be sent to port 513 of one of our HFs and created an app that parses them with a regex. The goal was to reduce the logs and save license usage. So far, so good... everything was working as intended... until... As it turns out, every night, exactly at midnight, the Heavy Forwarder stops collecting from those sources (only those) and nothing is indexed until someone restarts the splunkd service (which could potentially be never) and gives new life to the collector. Here's the odd part: during the no-collection window, tcpdump shows syslog data arriving on port 513, so the firewall never stops sending data to the HF, yet no logs are indexed. Only after a restart do we see the logs indexed again. The Heavy Forwarder at issue sits on top of an Ubuntu 22 LTS minimized server edition. Here are the app configuration files:

inputs.conf:

[udp:513]
sourcetype = syslog
no_appending_timestamp = true
index = generic_fw

props.conf:

[source::udp:513]
TRANSFORMS-null = nullQ
TRANSFORMS-soph = sophos_q_fw, sophos_w_fw, null_ip

transforms.conf:

[sophos_q_fw]
REGEX = hostname\sulogd\[\d+\]\:.*action=\"accept\".*initf=\"eth0\".*
DEST_KEY = queue
FORMAT = indexQueue

[sophos_w_fw]
REGEX = hostname\sulogd\[\d+\]\:.*action=\"accept\".*initf=\"eth0\".*
DEST_KEY = _MetaData:Index
FORMAT = custom_sophos

[null_ip]
REGEX = dstip=\"192\.168\.1\.122\"
DEST_KEY = queue
FORMAT = nullQueue

We didn't see anything out of the ordinary in the processes that start at midnight on the HF. At this point we have no clue about what's happening. How can we troubleshoot this situation? Thanks
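A starting point for that troubleshooting question: the HF's own metrics show whether a pipeline queue starts blocking at midnight. A minimal SPL sketch, assuming the HF forwards its _internal logs to the indexers and using a placeholder host name:

index=_internal source=*metrics.log* host=<hf_hostname> group=queue blocked=true
| timechart span=10m count BY name

index=_internal source=*metrics.log* host=<hf_hostname> group=queue
| timechart span=10m max(current_size_kb) BY name

If a queue such as parsingqueue or indexqueue begins blocking at 00:00, the stall is downstream of the UDP input rather than in the input itself.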
Hi @Roberto.Barnes, Thanks for asking your question on the community. Since it's been a few days with no reply, you can reach out to AppDynamics Support for more help, or even try reaching out to your AppD Rep with this particular question. How do I submit a Support ticket? An FAQ 
Hi @niketn! The steps to move the lookup editor code into my own app worked great, thank you! In light mode, everything is working perfectly. However, for dashboards that are in dark mode, the editable lookup tables are being displayed as white text on a white background. Do you (or anyone else) have any tips on how to make the lookup editor tables display correctly in dark mode? Even just changing the background colour or text colour of the embedded editable lookup would be really helpful. Thank you so much for any help!
I'm sure you've already solved this one, but maybe it'll help someone else down the line. We ran into a similar issue lately - in our case it was caused by a pre-9.x copy of search.xml in $SPLUNK_HOME/etc/apps/search/local/ui/dashboards
Splunk cannot search what it doesn't have.
Try to do this:

1. Open the Dashboard in Edit Mode: Navigate to the dashboard you want to edit and click the "Edit" button to enter edit mode.

2. Access the Source Code: In the top right corner of the dashboard editor, click the "Source" button to open the JSON source code of the dashboard.

3. Add Custom CSS: Insert a custom CSS block within the JSON to target and hide the export and full-screen icons. To add custom CSS, define a css block within the options field of your dashboard's JSON configuration. Here's a sample:

{
  "type": "dashboard",
  "title": "Your Dashboard Title",
  "options": {
    "css": ".dashboard-panel .dashboard-panel-action { display: none !important; }"
  },
  "visualizations": [
    {
      "type": "icon",
      "options": {
        "title": "Your Icon Title",
        "drilldown": {
          "type": "link",
          "dashboard": "linked_dashboard_name"
        }
      }
    }
  ]
}

(Add other visualizations to the "visualizations" array as needed.)

4. Save and Verify: Save the changes to the dashboard and verify that the export and full-screen icons no longer appear when hovering over the icon.

See if this works.
But if the VM does not log that data, then it is not possible?
@Santosh2, Glad to hear the solution is working. It would be great if you could accept the answer as a solution so that it helps other community users.
Hi @BB_MW, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
@tej57 Got it, thank you
inputs.conf:

[monitor:///var/log/json]
disabled = 0
index = app_prod
sourcetype = app-json
crcSalt = <SOURCE>

There is no props.conf.

Events:

1712744099:{"jsonefd":"1.0","result":"1357","id":1}
1712744400:{"jsonefd":"1.0","result":"1357","id":1}
1712745680:{"jsonefd":"1.0","result":"1357","id":1}
1714518017:{"jsonefd":"1.0","result":"1378","id":1}
1715299221:{"jsonefd":"1.0","result":"1366","id":1}

As you said, I searched with All Time and no results were found.
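One hedged possibility, not a confirmed fix: with no props.conf, Splunk has to guess how to break lines and read the leading epoch timestamp, and a wrong guess can make events hard to find. A minimal props.conf sketch for the indexing tier, assuming the leading number is a Unix epoch:

[app-json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 11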
Sure @gcusello

Sample event:

{ [-]
   application: uslcc-nonprod
   cluster: AKS-SYD-NPDI1-ESE-2
   container_id: 9ae09dba5f0ca4c75dfxxxxxxb6b1824ec753663f02d832cf5bfb6f0dxxxxxxx
   container_image: acrsydnpdi1ese.azurecr.io/ms-usl-acct-maint:snapshot-a23584a1221b57xxxxxb437d80xxxxxxb6e65
   container_name: ms-usl-acct-maint
   level: INFO
   log: 2024-05-06 11:08:40.385 INFO 26 --- [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] [CID:Perf-May06-9-151615] l.AccountCreditLimitChangedKafkaListener : message="xxxxx listener 'account credit limit event enrichment'", elapsed_time_ms="124"
   namespace: uslcc-nonprod
   node_name: aks-application-3522xxxxx-vmss0000xl
   pod_ip: 10.209.82.xxx
   pod_name: ms-usl-acct-maint-ppte-7dc7xxxxxx-2fc58
   tenant: uslcc
   timestamp: 2024-05-06 11:08:40.385
}

Raw:

{"log":"2024-05-06 11:08:40.385 INFO 26 --- [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] [CID:Perf-May06-9-151615] l.AccountCreditLimitChangedxxxxxListener : message=\"xxxxx listener 'account credit limit event enrichment'\", elapsed_time_ms=\"124\"","application":"uslcc-nonprod","cluster":"AKS-SYD-NPDI1-ESE-2","namespace":"uslcc-nonprod","tenant":"uslcc","timestamp":"2024-05-06 11:08:40.385","level":"INFO","container_id":"9ae09dba5xxxxxfd2724b6b1824ec753663f02dxxxxxf0d55d59940","container_name":"ms-usl-acct-maint","container_image":"acrsydnpdi1ese.azurecr.io/ms-usl-acct-maint:snapshot-a23584a1221b5749xxxxxd803eb2aabaxxxxx5","pod_name":"ms-usl-acct-maint-ppte-7dc7c9xxxxc58","pod_ip":"10.209.82.xxx","node_name":"aks-application-35229300-vmssxxxxxl"}
I found the issue - there was a rogue local/props.conf in a completely unrelated app that had all sorts of EXTRACTs, FIELDALIASes, etc. but, crucially, no stanza spec! One of the extractions defined a field called 'action' with a regex that didn't match anything in my raw event and, because the rogue extract had a higher precedence, my extract didn't get populated. Rather than ignoring the unscoped props statements, Splunk applied the EXTRACTs, etc. to everything! Removing the props.conf solved my issue and everything is good.
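For anyone hitting the same thing, a small illustration with a hypothetical field: settings placed before any stanza header in props.conf fall into the [default] stanza and therefore apply to every event, which is exactly the behavior described above.

# Unscoped - lands in [default] and runs everywhere:
EXTRACT-action = action=(?<action>\w+)

# Scoped - only runs for the intended sourcetype:
[my:sourcetype]
EXTRACT-action = action=(?<action>\w+)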
I'm trying to run personal scripts in Splunk from a dashboard. I want the dashboard to call a script based on user input and then output the script's results to a table. I'm testing this with a Python script that calls a PowerShell script, gets the data back, and returns it to the Splunk dashboard. This is what I have so far:

Test_PowerShell.py Python script:

import splunk.Intersplunk
import sys
import subprocess

results, unused1, unused2 = splunk.Intersplunk.getOrganizedResults()

# Define the path to the PowerShell script
ps_script_path = "./Test.ps1"

# Define the argument to pass to the PowerShell script
argument = sys.argv[1]

# Execute the PowerShell script with the argument
results = subprocess.run(['powershell.exe', '-File', ps_script_path, argument], capture_output=True, text=True)

splunk.Intersplunk.outputResults(results)

Page XML:

<form version="1.1" theme="dark">
  <label>Compliance TEST</label>
  <description>TESTING</description>
  <fieldset submitButton="false" autoRun="false"></fieldset>
  <row>
    <panel>
      <title>Input Panel</title>
      <input type="text" token="user_input">
        <label>User Input:</label>
        <default>*</default>
      </input>
    </panel>
  </row>
  <row>
    <panel>
      <title>Script Output</title>
      <table>
        <search>
          <query>| script python testps $user_input$ | table field1</query>
          <earliest>$earliest$</earliest>
          <latest>$latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

Test.ps1 PowerShell script:

Write-Host $args[0]

commands.conf:

[testps]
filename = Test_PowerShell.py
streaming = true
python.version = python3

default.meta:

[commands/testps]
access = read : [ * ], write : [ admin ]
export = system

[scripts/Test_PowerShell.py]
access = read : [ * ], write : [ admin ]
export = system

The error I'm getting is the following: External search command 'testps' returned error code 1.
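One hedged guess at the cause, not a confirmed fix: splunk.Intersplunk.outputResults() expects a list of dictionaries, but the script hands it the CompletedProcess object returned by subprocess.run. A sketch that turns the script's stdout into rows (the field name field1 matches the dashboard's table command):

import splunk.Intersplunk
import sys
import subprocess

try:
    # Read any piped-in results and settings (unused here)
    results, dummy_results, settings = splunk.Intersplunk.getOrganizedResults()

    argument = sys.argv[1] if len(sys.argv) > 1 else ""

    # Run the PowerShell script and capture its stdout
    proc = subprocess.run(
        ['powershell.exe', '-File', './Test.ps1', argument],
        capture_output=True, text=True
    )

    # outputResults() wants a list of dicts: one row per line of output
    rows = [{'field1': line} for line in proc.stdout.splitlines()]
    splunk.Intersplunk.outputResults(rows)
except Exception as e:
    splunk.Intersplunk.outputResults(splunk.Intersplunk.generateErrorResults(str(e)))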
Hello, I have the same problem. Can we roll back the installation? Any other ideas?
Is this the actual WARN log message you found? If so, what was the reason for the back-pressure?
Please share the inputs.conf and props.conf stanzas related to the input. Have you searched the last chance index (usually 'main')?  Have you searched all time, including the future, in case the timestamps are not interpreted correctly?
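For that all-time check, a quick SPL sketch using the index and sourcetype from the related post (adjust as needed); comparing _time to _indextime shows whether timestamp parsing went wrong:

index=app_prod sourcetype=app-json earliest=0 latest=+10y
| eval indexed_at=strftime(_indextime, "%F %T")
| table _time indexed_at _raw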
Yes, it is possible.  If the VM or identity provider logs failed logins to Splunk then you can search those events for multiple attempts within a given timeframe.
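As a hedged sketch of that search, with placeholder index, sourcetype, and field names that depend on what the VM or identity provider actually logs:

index=<auth_index> sourcetype=<auth_sourcetype> action=failure
| bin _time span=10m
| stats count AS failed_attempts BY _time, user, src
| where failed_attempts >= 5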
We can't have more than one email action, and it has nothing to do with sendemail.py. Splunk does not allow more than one config file stanza with the same name. If it finds more than one, they are merged into a single stanza.
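A small illustration of that merge behavior, with hypothetical settings split across two apps' alert_actions.conf files (btool can confirm the merged view):

# apps/app_a/local/alert_actions.conf
[email]
from = alerts@example.com

# apps/app_b/local/alert_actions.conf
[email]
mailserver = smtp.example.com

# What Splunk effectively uses after merging:
[email]
from = alerts@example.com
mailserver = smtp.example.com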
That's great feedback. We will add output group.