All Posts

This worked for me! The other 2 above did not for some reason. Thanks
Hi all, We want to test whether a cluster bundle on the cluster manager requires a restart of the cluster peers, using the REST API.

In the first step we run a POST against https://CLM:8089/services/cluster/manager/control/default/validate_bundle?output_mode=json with check-restart=true in the body, and check json.entry[0].content.checksum to get the checksum of the new bundle. If there is no checksum, there is no new bundle.

Second, we check that checksum against GET https://CLM:8089/services/cluster/manager/info?output_mode=json, comparing json.entry[0].content.last_validated_bundle.checksum and json.entry[0].content.last_dry_run_bundle.checksum to verify that the bundle check and the restart test have completed, and then consult json.entry[0].content.last_check_restart_bundle_result to determine whether the restart is necessary or not.

Unfortunately, we see that the value of json.entry[0].content.last_check_restart_bundle_result changes even after last_validated_bundle.checksum and last_dry_run_bundle.checksum are set to the correct values.

To make a long story short: we see that the red value is changing while the green one is not, which is unexpected for us. Tested against v9.2.5 and v9.4.1. At the moment it looks like a timing issue to me, and I want to avoid sleep() code.

Is there a more solid way to check whether a restart is necessary or not?

best regards,

Andreas
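For reference, a simplified sketch of what we are doing today (hypothetical credentials; the bounded polling loop is the part we would like to replace with something more deterministic):

    import time
    import requests

    BASE = "https://CLM:8089"     # cluster manager
    AUTH = ("admin", "changeme")  # hypothetical credentials

    # Step 1: validate the bundle with check-restart=true in the body.
    r = requests.post(
        f"{BASE}/services/cluster/manager/control/default/validate_bundle",
        params={"output_mode": "json"},
        data={"check-restart": "true"},
        auth=AUTH,
        verify=False,
    )
    r.raise_for_status()
    checksum = r.json()["entry"][0]["content"].get("checksum")
    if not checksum:
        raise SystemExit("no new bundle")

    # Step 2: poll /cluster/manager/info until both checksums match the
    # new bundle, then read last_check_restart_bundle_result.
    deadline = time.time() + 120
    while time.time() < deadline:
        info = requests.get(
            f"{BASE}/services/cluster/manager/info",
            params={"output_mode": "json"},
            auth=AUTH,
            verify=False,
        ).json()["entry"][0]["content"]
        if (info["last_validated_bundle"]["checksum"] == checksum
                and info["last_dry_run_bundle"]["checksum"] == checksum):
            print("restart needed:", info["last_check_restart_bundle_result"])
            break
        time.sleep(2)
    else:
        raise TimeoutError("dry run did not finish in time")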
Hey, an email was released today from the Splunk Cloud Platform Team stating that to fix this issue we should patch up to 9.4.0, 9.3.2, 9.2.4 or 9.1.7, as you have mentioned above. Last month the "Splunk Security Advisories" said to patch up to 9.4.1, 9.3.3, 9.2.5, and 9.1.8, so if we are on the 9.4.1, 9.3.3, 9.2.5, or 9.1.8 versions, are we covered by the fix? Second question: if Splunk issued a recommendation to patch up to a higher-level patch, why would they come back and recommend patching to a lower version with security vulnerabilities instead of patching up?
There is absolutely no problem with running rsyslog on the same box as Splunk, provided that you're not trying to bind the same port(s) to both programs. Have you configured rsyslog to receive network data on the proper ports? Did you verify it is listening? Did you check with tcpdump/wireshark whether the UCS is sending data?
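For reference, a minimal listener sketch (hypothetical file name and port; assumes the modern RainerScript syntax and that Splunk isn't already bound to the same port):

    # /etc/rsyslog.d/50-network.conf (hypothetical file name)
    # Open UDP and TCP listeners; pick ports that don't collide with
    # anything Splunk itself is already bound to.
    module(load="imudp")
    input(type="imudp" port="514")
    module(load="imtcp")
    input(type="imtcp" port="514")

    # Write remote traffic to a file that Splunk can then monitor.
    if $fromhost-ip != '127.0.0.1' then {
        action(type="omfile" file="/var/log/remote/network.log")
    }

Then "ss -tulpn | grep 514" should show rsyslogd listening, and "tcpdump -ni any port 514" will tell you whether the UCS traffic actually arrives.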
You need to create user-seed.conf https://docs.splunk.com/Documentation/Splunk/latest/Admin/User-seedconf
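For example, a minimal user-seed.conf (placeholders for the username and password; put it in $SPLUNK_HOME/etc/system/local/ and Splunk consumes it on the next start, when no users exist):

    [user_info]
    USERNAME = admin
    PASSWORD = yournewpassword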
Manipulating structured data on ingest is best done by external means (the particular method would depend on the use case). In some limited cases you could probably do some regex magic to remove parts of incoming events, but this solution would be highly sensitive to the data format. Oh, and I'm assuming you want to just remove key4, not dynamically decide which one to drop.
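Just to illustrate how brittle that gets, a props.conf sketch along those lines (hypothetical sourcetype; assumes key4 is always the last key and the whitespace never changes):

    [my_json_sourcetype]
    # Strip a literal "key4" pair just before the closing brace;
    # this breaks the moment the ordering or formatting changes.
    SEDCMD-drop_key4 = s/,\s*"key4"\s*:\s*"[^"]*"\s*}/}/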
Thanks for your post.  Click on the gear on the right and select Host name. This change appeared after an upgrade and I assumed something was broken with my forwarders.  So dumb to remove the most important field from the table.
We want to use splunk-library-javalogging to send logs via Log4j to the Splunk service.

Environment: Spark with log4j2 in Azure Databricks ----> Splunk Enterprise

The config file log4j2.xml:

    <?xml version="1.0" encoding="UTF-8"?>
    <Configuration status="INFO" packages="com.splunk.logging,com.databricks.logging.log4j" shutdownHook="disable">
      <Appenders>
        ...
        <SplunkHttp name="http-input"
                    url="https://url-service"
                    token="xxxx-xxxx-xxxx-xxx-xxx--xxxx"
                    host=""
                    index="my-index"
                    source="spark-work"
                    sourcetype="httpevent"
                    messageFormat="text"
                    middleware="HttpEventCollectorUnitTestMiddleware"
                    connect_timeout="5000"
                    termination_timeout="1000"
                    disableCertificateValidation="true">
          <PatternLayout pattern="%m%n"/>
        </SplunkHttp>
      </Appenders>
      <Loggers>
        <Root level="INFO">
          ...
        </Root>
        ...
        <Logger name="splunk.log4j" level="DEBUG">
          <AppenderRef ref="http-input"/>
        </Logger>
      </Loggers>
    </Configuration>

We use the library splunk-library-javalogging: splunk-library-javalogging-1.11.8.jar with okhttp-4.11.0.jar, okio-3.5.0.jar and okio-jvm-3.5.0.jar.

We based the configuration on this example: https://github.com/splunk/splunk-library-javalogging/blob/main/src/test/resources/log4j2.xml

Currently it doesn't work. We checked HEC via curl: we can send a message from Databricks to Splunk HEC and it is received without problems. Does anyone have any experience, or can anyone give us some guidance or advice? Thanks
Not sure this is even possible, but I'll ask anyway... I have application(s) that are sending JSON data into Splunk, for example:

    {
      "key1": "value1",
      "key2": "value2",
      "key3": "value3",
      "key4": "value1"
    }

As you can see, the value for "key4" is the same as for "key1". So, in my example, I don't want to ingest the complete JSON payload, but only:

    {
      "key1": "value1",
      "key2": "value2",
      "key3": "value3"
    }

Can this be done?
I am trying to learn Splunk Enterprise. I created the account and logged in with no problem. I downloaded the demo data and did some stuff with that, over a few days. Finally, today I was not able to log in; it said my login info was incorrect. I figured I had messed up the password somehow, and reset it by going to the command line and using the command del /f /q "C:\Program Files\Splunk\etc\passwd". Now on the Splunk Enterprise page there is a note saying 'No users exist. Please set up a user'. How? And have I lost the demo data?
Hi, I have a small lab (air gapped) with two Linux servers, not including the Splunk server, and 25 Windows machines. I have deployed Splunk and am ingesting logs from all Linux and Windows clients, and also from a network switch, a VMware server and hosts. I am able to send logs from the network switch and VMware hosts directly into Splunk using "Data Inputs->TCP" and picking different ports for each service, but for the Cisco UCS chassis I can't configure anything other than the syslog server name and log level. So I set up an rsyslog server on the same machine as Splunk Enterprise. It seems to be running, but I don't see any logs from the Cisco UCS. I have checked the firewall rules as well, and all seems to be configured properly. Any tips about running rsyslog and the Splunk server on the same machine, and about sending Cisco UCS logs to rsyslog/Splunk, would be appreciated. Unfortunately, I can't provide much info as this is an air gapped lab.
Thank you @livehybrid for the above solution. However, now that I have added winrm to the app directory and deployed it on Splunk Cloud, I am getting a new issue:

    ImportError: urllib3 v2 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'OpenSSL 1.0.2zk-fips 3 Sep 2024'. See: https://github.com/urllib3/urllib3/issues/2168

And below is how I am importing winrm:

    import argparse
    import os
    import sys

    lib_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'lib'))
    sys.path.insert(0, lib_dir)

    import winrm

    def clean_old_files(TargetServer, FolderPath, FileThresholdInMinutes, UserName, Password):

Please guide me on how I can overcome this issue.
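Would re-vendoring the libraries with urllib3 pinned below v2 be the right approach here? Something like the following (an assumption on my part, not yet tested; urllib3 2.x refuses to import against OpenSSL older than 1.1.1, which looks like exactly this error):

    # Re-install winrm's dependencies into the app's lib directory,
    # keeping urllib3 on the 1.x line for the old OpenSSL.
    pip install pywinrm "urllib3<2" -t lib/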
In the ingestion pipeline Splunk has no notion of most of the fields, since they're usually search-time extracted. So you can either bend over backwards trying to create a regex which will account for all possible situations (which might get tricky if the order of the fields isn't fixed or if their presence isn't mandatory), or you can do another dirty trick: extract the fields at index time, do an INGEST_EVAL based on their value, and then remove them (assign null to them) so they don't get indexed.
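A rough sketch of that second trick (hypothetical stanza and sourcetype names; assumes Splunk 9.x, where the json_* eval functions, including json_delete, are available, so the JSON itself can be rewritten instead of juggling indexed fields):

    # transforms.conf
    [drop_duplicate_key4]
    INGEST_EVAL = _raw:=if(json_extract(_raw,"key4")==json_extract(_raw,"key1"), json_delete(_raw,"key4"), _raw)

    # props.conf
    [my_json_sourcetype]
    TRANSFORMS-dropkey4 = drop_duplicate_key4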
You should try to create a suitable REGEX to match only those events. One other comment: never put a comment # mark in the middle of a conf line! At least in some conf files Splunk cannot recognize it as a comment and will try to do something with it, with unknown results.
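For instance (hypothetical stanza, just to show the placement):

    [my_sourcetype]
    # This comment is safe: it sits on its own line.
    LINE_BREAKER = ([\r\n]+)

Whereas "LINE_BREAKER = ([\r\n]+)   # break on newlines" would make the trailing "# break on newlines" part of the setting's value.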
How have you defined line breaking?
@David_M  Good to know that adding the outputcsv command resolved the issue.
Thanks for the reply! The rule implementation instructions say to run this search every 30 days, but they don't say how long a time range I should search over. I've already tested searching the last 90 minutes, but it didn't cover even close to what I needed. I tried that time range because this search is very expensive and takes a long time. Do you have any suggestions for the schedule and the search period? Another point: I tried, but I didn't find any way to differentiate service accounts from user accounts based on fields.
I'm now using version 4.4.1 of the Add-On Builder, and that seems to have helped slightly. I'm still getting the same failure, but at least now it says I'm using version 2.0.1 of the python SDK. It requires version 2.0.2 though. I'm still completely lost as to how to satisfy the validator. Is anyone able to make it past this one? Here's the error output: [{"result": "failure", "message": "Detected an outdated version of the Splunk SDK for Python (2.0.1). Upgrade to 2.0.2 or later. File: bin/splunklib/__init__.py Line Number: 33", "message_filename": "bin/splunklib/__init__.py", "message_line": 33}, {"result": "warning", "message": "Splunk SDK for Python detected (version 2.1.0). No action required at this time. File: bin/ta_ignio_integration_add_on/aob_py3/splunklib/__init__.py Line Number: 33", "message_filename": "bin/ta_ignio_integration_add_on/aob_py3/splunklib/__init__.py", "message_line": 33}]   I'm also getting this validation error: [{"result": "failure", "message": "Found AArch64-incompatible binary file. Remove or rebuild the file to be AArch64-compatible. File: bin/ta_ignio_integration_add_on/aob_py3/yaml/_yaml.cpython-37m-x86_64-linux-gnu.so File: bin/ta_ignio_integration_add_on/aob_py3/yaml/_yaml.cpython-37m-x86_64-linux-gnu.so", "message_filename": "bin/ta_ignio_integration_add_on/aob_py3/yaml/_yaml.cpython-37m-x86_64-linux-gnu.so", "message_line": null}, {"result": "failure", "message": "Found AArch64-incompatible binary file. Remove or rebuild the file to be AArch64-compatible. File: bin/ta_ignio_integration_add_on/aob_py3/markupsafe/_speedups.cpython-37m-x86_64-linux-gnu.so File: bin/ta_ignio_integration_add_on/aob_py3/markupsafe/_speedups.cpython-37m-x86_64-linux-gnu.so", "message_filename": "bin/ta_ignio_integration_add_on/aob_py3/markupsafe/_speedups.cpython-37m-x86_64-linux-gnu.so", "message_line": null}]   Unfortunately, the Add-On Builder doesn't mention how to actually fix either issue, so I'm stuck for now. You mentioned using -t to target the pip install, but I'm not sure where I would be targeting. 
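For reference, is the idea something like this (guessing at the target path, and assuming the SDK's PyPI package name is splunk-sdk)?

    # Hypothetical: overwrite the outdated splunklib copy the validator
    # flags under bin/ with the current SDK release.
    pip install --upgrade splunk-sdk -t <your_app>/bin/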
I came across an identical thread at Re: How does Splunk calculate linecount? - Splunk Community
@livehybrid yes, I am trying it outside the dashboard, in the search bar; I am not getting any error, but no results either.