All Posts


@rksharma2808  Check this https://www.servicenow.com/community/developer-forum/unable-to-create-incidents-via-splunk-add-on-for-servicenow/m-p/2815690 
Hi @rksharma2808  Are you able to change the log level to DEBUG to see if this produces any different logs? Also - do you get an error when setting up the account in the ServiceNow app, or when an input runs? Do you have any logs created with a name like "splunk_ta_snow_main.log" containing useful information? Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped. Regards Will
See my answer to a similar question https://community.splunk.com/t5/Dashboards-Visualizations/Dashboard-CSS-Width-setup-doesn-t-work-anymore-with-9-x-version/m-p/712713/highlight/true#M58300
@rksharma2808  A 500 Internal Server Error from ServiceNow when trying to create a ticket usually indicates an issue on the ServiceNow side rather than in Splunk. Ensure the endpoint is accessible from Splunk (e.g., test it via curl or Postman). A 500 error can also occur if the payload sent to ServiceNow is malformed or missing required fields, so cross-check the payload fields against ServiceNow's API documentation for ticket creation. If possible, log the payload being sent by Splunk and manually test it using Postman or curl to identify the exact issue. I would also recommend setting up a call with the ServiceNow team to resolve this.
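As a quick way to sanity-check the payload before involving the ServiceNow team, something like the sketch below can help. This is only an illustration: `build_incident_payload`, the field names, and the required-field set are assumptions, not the add-on's actual code; verify the real field list against your instance's Table API documentation.

```python
import json

# Hypothetical helper: build and sanity-check an incident payload before
# sending it to ServiceNow's Table API (POST /api/now/table/incident).
# The field names and required-field set below are assumptions; check
# them against your instance's documentation.
REQUIRED_FIELDS = {"short_description", "caller_id"}

def build_incident_payload(short_description, caller_id, urgency="3"):
    payload = {
        "short_description": short_description,
        "caller_id": caller_id,
        "urgency": urgency,
    }
    # Treat empty values as missing, so a malformed payload fails locally
    # instead of coming back from ServiceNow as a 500.
    missing = REQUIRED_FIELDS - {k for k, v in payload.items() if v}
    if missing:
        raise ValueError(f"Missing required fields: {missing}")
    return json.dumps(payload)

# A well-formed payload serializes cleanly; an empty short_description
# would be rejected before the request is ever sent.
body = build_incident_payload("Disk full on host01", "abel.tuter")
```

Once the payload looks valid, the same JSON body can be POSTed manually (e.g., with curl or Postman) to the instance's incident endpoint to see whether ServiceNow itself returns the 500.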
Try something like this (depending on where your hidden/style panel is in the row):

<row id="MasterRow">
  <panel depends="$alwaysHideCSS$">
    <title>Single value</title>
    <html>
      <style>
        #MasterRow div:nth-child(2).dashboard-cell {width:15% !important;}
        #MasterRow div:nth-child(3).dashboard-cell {width:85% !important;}
      </style>
    </html>
  </panel>
  <panel id="Panel1">....</panel>
  <panel id="Panel2">....</panel>
</row>
Hello Kiran, thank you. We tried generating a new token, but still see:

log_level=ERROR pid=403773 tid=Thread-1 file=snow_ticket.py:_handle_response:572 | [invocation_id=d1d96adc92a7437e907573c9d8226bcb] Failed to create ticket. Return code is 500 (Internal Server Error).
@rksharma2808  As the error message suggests, try regenerating the access token. This can often resolve the issue if the token has expired. Ensure that the new access token has a sufficient expiry time. Sometimes, tokens are set to expire too quickly, causing frequent issues. If you are hitting API rate limits, ServiceNow might invalidate the token. Verify with your ServiceNow admin if rate limits are being enforced.
@gcusello Thanks
I have integrated Splunk with ServiceNow and am getting the below error:

log_level=ERROR pid=531305 tid=MainThread file=snow_data_loader.py:_do_collect:538 | Failure potentially caused by expired access token. Regenerating access token
@gcusello  Sorry, I might have confused you. Let me try to illustrate this clearly.

I have server.host — this is where the test_log.json log is being collected.

There is also splunk.test.host — there I configured a Data Input, opened port 765 TCP, assigned it the index test_index, and set the sourcetype to _json. The setup on the splunk.test.host side is complete, and all network access is in place.

Now, on the server.host side: in /etc/rsyslog.d/, I created a file called send_splunk.conf. In this config file, I specify the address splunk.test.host, port 765, and the TCP protocol.

However, I'm having trouble correctly configuring /etc/rsyslog.d/send_splunk.conf so that rsyslog reads the test_log.json file and sends each new line to Splunk as it appears in the file.
Hi @gitingua , there may be a little confusion here: do you want to ingest syslogs using rsyslog or a TCP Input? They are two different ways to ingest syslogs: with rsyslog, you use rsyslog to receive the logs and write them to a text file, which you then read using a file input. With a TCP Input, you configure the input in Splunk without using rsyslog and forward the logs directly to Splunk. The second solution is easier to implement but has the drawback that it only works while Splunk is up; e.g., during a restart you lose syslogs. For this reason the rsyslog solution is preferable, even though you have to configure two things: first rsyslog (for more info see https://www.rsyslog.com/doc/index.html ) and then the Splunk file input (for more info see https://docs.splunk.com/Documentation/Splunk/9.4.0/Data/Monitorfilesanddirectorieswithinputs.conf ). Lastly, since the data is in JSON format, remember to add INDEXED_EXTRACTIONS = json to your props.conf; that way all the fields are extracted automatically. Ciao. Giuseppe
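To illustrate Giuseppe's last point, a props.conf stanza for the monitored file might look like the sketch below. The sourcetype name `test_json` is a placeholder, and the timestamp settings are assumptions based on the format shown in the question ("2025/02/27 00:00:15"); adjust them to your actual data.

```
[test_json]
INDEXED_EXTRACTIONS = json
# Avoid extracting the same fields a second time at search time
KV_MODE = none
# Use the "timestamp" field from the JSON for event time
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %Y/%m/%d %H:%M:%S
```

With indexed extractions, this stanza must be present where the file is parsed (e.g., on the forwarder that monitors the file), not only on the search head.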
Hi @JIreland  This will entirely depend on the sourcetypes that you are feeding into Splunk. What is the source of your audit data? There may be some pre-built dashboards for some of these in apps specific to the type of data you are bringing in. Otherwise it might be a case of working through the data to put together each of these. When you are putting together the dashboards, bear in mind that things within dashboards can be missed or overlooked if crowded, and that alerts may be more suitable for rare events that you need to know about. Please let me know how you get on and consider accepting this answer or adding karma to it if it has helped. Regards Will
Hi colleagues, hope everyone is doing well! I need some advice. I have a server that writes logs to /var/log/test_log.json. On the Splunk side, I opened a port via "Data Input -> TCP". The logs in test_log.json are written line by line. Example:

{"timestamp":"2025/02/27 00:00:15","description":"Event 1"}
{"timestamp":"2025/02/27 00:00:16","description":"Event 2"}
{"timestamp":"2025/02/27 00:00:17","description":"Event 3"}

Could anyone suggest a ready-made rsyslog configuration for correctly reading this log file? The file is continuously updated with new logs, each on a new line. I want rsyslog to read the file and send each newly appearing line to Splunk as a separate log event. Has anyone encountered this before and could help with a ready-made rsyslog configuration? Thank you!
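A minimal sketch of such a config, using rsyslog's imfile module to tail the file and omfwd to forward over TCP. The host and port come from the question; the tag name is an assumption, and the exact syntax should be checked against your rsyslog version's documentation.

```
# /etc/rsyslog.d/send_splunk.conf (sketch, assumes rsyslog with imfile support)

# Load the file-input module and tail the JSON log line by line
module(load="imfile")

input(type="imfile"
      File="/var/log/test_log.json"
      Tag="test_log_json"
      Severity="info"
      Facility="local6")

# Forward every line picked up above to Splunk over TCP
if $syslogtag == 'test_log_json' then {
    action(type="omfwd"
           Target="splunk.test.host"
           Port="765"
           Protocol="tcp")
    stop
}
```

Note that by default rsyslog prepends a syslog header to forwarded messages; if Splunk should receive only the raw JSON line, define a template containing just %msg% and reference it from the omfwd action.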
Hi @kjehth93  In order to specify which hosts this goes to, you probably need to look at your Deployment Server configuration - are you already using this to deploy an app with the inputs.conf in it? Place the app in /opt/splunk/etc/deployment-apps/<yourAppName>, then go to https://yourSplunkInstance/en-US/manager/system/deploymentserver. On the "Server Class" tab select "New Server Class" and give it a name. Then proceed to add your app, and then add clients. When adding clients you can use wildcards along with IPs and/or hostnames in an allow/deny approach to target the hosts you'd like to deploy this inputs.conf to. Please let me know how you get on and consider accepting this answer or adding karma to it if it has helped. Regards Will
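Behind the scenes, those UI steps write a serverclass.conf on the deployment server roughly like the sketch below. The server class name, app name, and host patterns here are placeholders, not values from the question.

```
# $SPLUNK_HOME/etc/system/local/serverclass.conf (sketch, placeholder names)

# Only clients matching the whitelist patterns receive the app
[serverClass:find_version_hosts]
whitelist.0 = hostA*
whitelist.1 = 10.0.0.*

# Map the app (containing your inputs.conf) to that server class
[serverClass:find_version_hosts:app:yourAppName]
stateOnClient = enabled
restartSplunkd = true
```

Hosts that don't match the whitelist never receive the app, so the scripted input only runs on the endpoints you target.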
Ryan, thank you for the quick reply. It looks like there is no way, via an API, to pull the Dashboard's share URL. I am pursuing a possible solution with a SQL query against the AppD database to see what that can provide.
I would like to run PowerShell scripts and commands out to my endpoints via the Universal Forwarder, but based on the script or command I would like to specify which endpoint it goes to / which it collects an output from. I have attempted this with the following entry in the local inputs.conf, but it still ran on all the endpoints.

[powershell://find_version]
script = [powershell command here]
host = [XXX]
index = [index here]
schedule = [cron here]
disabled = 0
I'm having a similar issue with the Egnyte Collaborate TA - https://splunkbase.splunk.com/app/5653. When trying to use the add-on, I keep getting the following error:

01-22-2025 16:45:55.409 +0000 ERROR PersistentScript [1693337 PersistentScriptIo] - From {/opt/splunk/bin/python3.9 /opt/splunk/etc/apps/TA-egnyte-connect/bin/TA_egnyte_connect_rh_settings.py persistent}: solnlib.credentials.CredentialNotExistException: Failed to get password of realm=__REST_CREDENTIAL__#TA-egnyte-connect#configs/conf-ta_egnyte_connect_settings, user=proxy.

The issue is that I'm running Splunk Cloud and don't have the ability to modify local conf files. Any ideas on how to get this resolved for Splunk Cloud customers? Cheers.
Hello, newb here trying to get up to speed... I need to create dashboards that will allow me to audit the events listed in the JSIG:

1. Authentication events:
    (1) Logons (Success/Failure)
    (2) Logoffs (Success)
2. Security Relevant File and Objects events:
    (1) Create (Success/Failure)
    (2) Access (Success/Failure)
    (3) Delete (Success/Failure)
    (4) Modify (Success/Failure)
    (5) Permission Modification (Success/Failure)
    (6) Ownership Modification (Success/Failure)
3. Export/Writes/downloads to devices/digital media (e.g., CD/DVD, USB, SD) (Success/Failure)
4. Import/Uploads from devices/digital media (e.g., CD/DVD, USB, SD) (Success/Failure)
5. User and Group Management events:
    (1) User add, delete, modify, disable, lock (Success/Failure)
    (2) Group/Role add, delete, modify (Success/Failure)
6. Use of Privileged/Special Rights events:
    (1) Security or audit policy changes (Success/Failure)
    (2) Configuration changes (Success/Failure)
7. Admin or root-level access (Success/Failure)
8. Privilege/Role escalation (Success/Failure)
9. Audit and security relevant log data accesses (Success/Failure)
10. System reboot, restart and shutdown (Success/Failure)
11. Print to a device (Success/Failure)
12. Print to a file (e.g., pdf format) (Success/Failure)
13. Application (e.g., Adobe, Firefox, MS Office Suite) initialization

Are there templated Splunk search commands for these? And if so, could you point me to them? Many thanks!
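For the authentication category, a starting-point search might look like the sketch below. It assumes Windows Security logs are already being indexed (index and sourcetype names here are common conventions, not a given); EventCode 4624 is a successful logon and 4625 a failed one in Windows Security auditing. Other categories would need equivalent searches over their own sourcetypes.

```
index=wineventlog sourcetype="XmlWinEventLog" EventCode IN (4624, 4625)
| eval outcome=if(EventCode=4624, "Success", "Failure")
| timechart count by outcome
```

Apps on Splunkbase specific to your data sources (e.g., the Splunk Add-on for Microsoft Windows) often ship field extractions that make these searches much easier to build.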
I don't have admin rights in Splunk. Is there an easy way to enforce this in the search query?
The issue has been resolved. I applied props.conf and transforms.conf on the search heads alone, and it didn't work. Once I also applied both props.conf and transforms.conf on the heavy forwarder, it worked. Thank you for your help!!!