All Posts


Hi @ganesanvc  Looking at the square brackets there, it looks like you're running the sub-search part in the SPL search box. Try removing the [ and ] so that we can see if it works independently.

Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
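To illustrate the suggestion above about removing the subsearch brackets (the index and field names here are made-up placeholders, not from the original question): if the full search were

index=web [ search index=assets | fields ip ]

then the part to test on its own in the search box would simply be

index=assets | fields ip

with the surrounding [ and ] removed.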
Hi @addOnGuy  I think the target would be <yourApp>/bin/ta_ignio_integration_add_on/ - if you look in that folder, is the previous version of splunk-sdk in there?

pip install --upgrade splunk-sdk --target <yourAppLocation>/bin/ta_ignio_integration_add_on/

Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi @JoaoGuiNovaes  I think every 30 days is way too infrequent for this - you would want the service accounts added fairly soon after they're first seen so the info can be used in other searches. Personally I would run it more frequently, e.g. hourly or every 4 hours. I usually set the search window to cover the time since the previous run, shifted back an extra 10 minutes to account for lag, so something like earliest=-70m latest=-10m (a 60-minute window, running every hour).

Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
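As a rough sketch of what that hourly scheduled search could look like (the index, field and lookup names are assumptions, not from the original post):

index=auth earliest=-70m latest=-10m user=svc_*
| stats min(_time) as first_seen by user
| inputlookup append=true service_accounts.csv
| stats min(first_seen) as first_seen by user
| outputlookup service_accounts.csv

The inputlookup/outputlookup pair merges newly seen service accounts into the existing lookup without losing earlier entries, and the final stats keeps the earliest first_seen per account.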
Hi @danielbb  It could be something like a field extraction happening after the line breaking which is causing this, or something else. Without access to your instance, we could do with seeing some sample logs along with the btool output ($SPLUNK_HOME/bin/splunk btool props list <sourceTypeName>) for your event's sourcetype. The thread you posted from 2013 looks like it could have been related to the events having a line-break in them. Please let us know if you're able to provide a sample + props output. Thanks
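For comparison, a minimal props.conf sketch of explicit line-breaking settings (the sourcetype name and values here are placeholders, not taken from the post):

# props.conf
[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25

Whatever btool reports for the sourcetype can then be compared against the events that are breaking in the wrong place.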
Hi @Prajwal_Kasar  This means that urllib3 v2.x is not compatible with the version of OpenSSL (1.0.2) installed in your Splunk Cloud Python environment. Even though you may have bundled your own libraries, you can't change the underlying OpenSSL on Splunk Cloud. urllib3 v2.0+ dropped support for OpenSSL < 1.1.1, however many environments (including Splunk Cloud's Python and underlying OS) still use OpenSSL 1.0.2. To fix this you need to pin urllib3 to v1.x. I would try installing a specific urllib3 package, 1.26.18, into your lib/deps folder along with winrm, as 1.26.18 supports OpenSSL 1.0.2.

Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
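A possible way to do that pin when packaging the app (the lib path is an assumption; adjust it to wherever your app keeps its bundled libraries):

# install pywinrm and a pinned urllib3 together so the resolver keeps 1.26.18
pip install "pywinrm" "urllib3==1.26.18" --target <yourApp>/lib --upgrade

Installing both in one command avoids a later dependency pull replacing the pinned urllib3 with a 2.x release.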
This worked for me! The other 2 above did not for some reason. Thanks
Hi all, We want to test whether a cluster bundle on the cluster manager needs to restart the cluster peers, using the REST API.

In the first step we run a POST against:
https://CLM:8089/services/cluster/manager/control/default/validate_bundle?output_mode=json
with check-restart=true in the body, and check json.entry[0].content.checksum to get the checksum of the new bundle. If there is no checksum, there is no new bundle.

Second, we check that checksum against GET:
https://CLM:8089/services/cluster/manager/info?output_mode=json
json.entry[0].content.last_validated_bundle.checksum
json.entry[0].content.last_dry_run_bundle.checksum
to verify that the bundle check and restart test have completed, and consider json.entry[0].content.last_check_restart_bundle_result to decide whether the restart is necessary or not.

Unfortunately, we see that the value of json.entry[0].content.last_check_restart_bundle_result changes even when last_validated_bundle.checksum and last_dry_run_bundle.checksum are set to the correct values. To make a long story short, the result value keeps changing while the checksums do not, which is unexpected for us. Tested against v9.2.5 and v9.4.1. At the moment it looks like a timing issue to me, and I want to avoid sleep() code.

Is there a more solid way to check if a restart is necessary or not?

best regards,

Andreas
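For reference, a sketch of the two calls described above as curl commands (host and credentials are placeholders):

# kick off bundle validation with a restart check
curl -k -u admin:changeme -X POST \
  "https://CLM:8089/services/cluster/manager/control/default/validate_bundle?output_mode=json" \
  -d check-restart=true

# poll this until last_validated_bundle.checksum and last_dry_run_bundle.checksum
# match the checksum returned above, then read last_check_restart_bundle_result
curl -k -u admin:changeme \
  "https://CLM:8089/services/cluster/manager/info?output_mode=json"

Polling on the checksums rather than sleeping for a fixed interval at least bounds the race, though as described above the result field still appears to lag behind them.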
Hey, an email was released today from the Splunk Cloud Platform Team stating that to fix this issue we should patch up to 9.4.0, 9.3.2, 9.2.4 or 9.1.7, as you have mentioned above. Last month the "Splunk Security Advisories" said to patch up to 9.4.1, 9.3.3, 9.2.5, and 9.1.8 - so if we are on the 9.4.1, 9.3.3, 9.2.5, or 9.1.8 versions, are we covered by the fix? Second question: if Splunk issued the recommendation to patch up to a higher patch level, why would they come back and recommend patching to a lower version with security vulnerabilities instead of patching up?
There is absolutely no problem with running rsyslog on the same box as Splunk, provided that you're not trying to bind the same port(s) to both programs. Have you configured rsyslog to receive network data on the proper ports? Did you verify it is listening? Did you check with tcpdump/wireshark whether the UCS is sending data?
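As a minimal sketch of those checks (port 514 and the file path are assumptions; match them to whatever the UCS is configured to send to):

# /etc/rsyslog.d/50-network.conf - enable UDP and TCP listeners
module(load="imudp")
input(type="imudp" port="514")
module(load="imtcp")
input(type="imtcp" port="514")

# then verify rsyslog is listening and that traffic actually arrives
ss -tulnp | grep 514
tcpdump -ni any port 514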
You need to create user-seed.conf https://docs.splunk.com/Documentation/Splunk/latest/Admin/User-seedconf
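A minimal sketch of that file, created before restarting Splunk (the username and password here are placeholders):

# $SPLUNK_HOME/etc/system/local/user-seed.conf
[user_info]
USERNAME = admin
PASSWORD = <your-new-password>

On the next start Splunk reads this file and recreates the admin user, so you can log in again with the credentials you seeded.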
Manipulating structured data on ingest is best done by external means (particular method would depend on the use case). In some limited cases you probably could do some regex magic to remove some parts of incoming events but this solution would be highly sensitive to data format. Oh, and I'm assuming you want to just remove key4, not dynamically decide which one to drop.
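As an example of the regex approach mentioned above (the sourcetype name is a placeholder, and this assumes "key4" is always the last key and always a simple string value):

# props.conf
[my_json_sourcetype]
SEDCMD-drop_key4 = s/,\s*"key4"\s*:\s*"[^"]*"//g

This rewrites _raw at ingest time to strip the key4 pair, and it breaks as soon as the JSON layout changes, which is exactly the fragility described above.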
Thanks for your post.  Click on the gear on the right and select Host name. This change appeared after an upgrade and I assumed something was broken with my forwarders.  So dumb to remove the most important field from the table.
We want to use splunk-library-javalogging to send logs via Log4j to Splunk.

Environment: Spark with log4j2 in Azure Databricks ----> Splunk Enterprise

The config file log4j2.xml:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="INFO" packages="com.splunk.logging,com.databricks.logging.log4j" shutdownHook="disable">
  <Appenders>
    ...
    <SplunkHttp name="http-input"
                url="https://url-service"
                token="xxxx-xxxx-xxxx-xxx-xxx--xxxx"
                host=""
                index="my-index"
                source="spark-work"
                sourcetype="httpevent"
                messageFormat="text"
                middleware="HttpEventCollectorUnitTestMiddleware"
                connect_timeout="5000"
                termination_timeout="1000"
                disableCertificateValidation="true">
      <PatternLayout pattern="%m%n"/>
    </SplunkHttp>
  </Appenders>
  <Loggers>
    <Root level="INFO"> ... </Root>
    ...
    <Logger name="splunk.log4j" level="DEBUG">
      <AppenderRef ref="http-input"/>
    </Logger>
  </Loggers>
</Configuration>

We use the library splunk-library-javalogging: splunk-library-javalogging-1.11.8.jar, with okhttp-4.11.0.jar, okio-3.5.0.jar and okio-jvm-3.5.0.jar. We based the configuration on this example: https://github.com/splunk/splunk-library-javalogging/blob/main/src/test/resources/log4j2.xml

Currently it doesn't work. We checked HEC via curl: we can send a message from Databricks to Splunk HEC and it is received without problems. Does anyone have any experience, or can help us with some guidance or advice? Thanks
Not sure this is even possible, but I'll ask anyway... I have application(s) that are sending JSON data into Splunk, for example:

{
  "key1": "value1",
  "key2": "value2",
  "key3": "value3",
  "key4": "value1"
}

As you can see, the value for "key4" is the same as "key1". So, in my example, I don't want to ingest the complete JSON payload, but only:

{
  "key1": "value1",
  "key2": "value2",
  "key3": "value3"
}

Can this be done?
I am trying to learn Splunk Enterprise. I created the account and logged in, no problem. I downloaded the demo data and did some stuff with that, over a few days. Finally, today I was not able to log in; it said my login info was incorrect. I figured I had messed up the password somehow, and reset it by going to the command line and using the command del /f /q "C:\Program Files\Splunk\etc\passwd". Now on the Splunk Enterprise page there is a note saying 'No users exist. Please set up a user'. How? And have I lost the demo data?
Hi, I have a small lab (air gapped) with about 2 Linux servers (not including the Splunk server) and 25 Windows machines. I have deployed Splunk and am ingesting logs from all Linux and Windows clients, and also from a network switch, a VMware server and hosts. I am able to send logs from the network switch and VMware hosts directly into Splunk using "Data Inputs -> TCP" and by picking different ports for each service, but for the Cisco UCS chassis the only things I can configure for sending logs are the syslog server name and the log level. So I set up an rsyslog server on the same machine as Splunk Enterprise. It seems to be running but I don't see logs from the Cisco UCS. I have checked firewall rules as well and all seems to be configured properly. Any tips about running rsyslog and the Splunk server on the same machine, and about sending Cisco UCS logs to rsyslog/Splunk, would be appreciated. Unfortunately, I can't provide much info as this is an air gapped lab.
Thank you @livehybrid for the above solution. However, now that I have added winrm to the app directory and deployed it on Splunk Cloud, I am getting a new issue:

ImportError: urllib3 v2 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'OpenSSL 1.0.2zk-fips 3 Sep 2024'. See: https://github.com/urllib3/urllib3/issues/2168

And below is how I am importing winrm:

import argparse
import os
import sys

# put the app's bundled lib/ directory on the Python path before importing winrm
lib_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'lib'))
sys.path.insert(0, lib_dir)

import winrm

def clean_old_files(TargetServer, FolderPath, FileThresholdInMinutes, UserName, Password):

Please guide me on how I can overcome this issue.
In the ingestion pipeline Splunk has no notion of most of the fields, since they're usually search-time extracted. So you can either bend over backwards trying to create a regex which will account for all possible situations (which might get tricky if the order of fields isn't fixed or if their presence isn't mandatory), or you can do another dirty trick - extract the fields at index time, do an INGEST_EVAL based on their values, and then remove them (assign null to them) so they're not getting indexed.
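A rough sketch of that index-time trick for the key1/key4 example above (the sourcetype name is a placeholder, and this assumes the JSON is index-time extracted on the parsing instance):

# props.conf
[my_json_sourcetype]
INDEXED_EXTRACTIONS = json
TRANSFORMS-dedupe_key4 = drop_key4_if_duplicate

# transforms.conf
[drop_key4_if_duplicate]
INGEST_EVAL = key4:=if(key4==key1, null(), key4)

Note this only drops the indexed key4 field; the raw event text still contains it, so it's a trade-off rather than a clean removal.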
You should try to create a suitable REGEX to match only those events. One other comment: never put a comment mark (#) in the middle of a conf line! At least in some conf files Splunk cannot recognize it as a comment and will try to do something with it, with unknown results.
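To illustrate (the stanza and setting names are placeholders):

# wrong - the trailing "# drop noisy events" becomes part of the value:
[my_sourcetype]
TRANSFORMS-null = setnull  # drop noisy events

# right - keep the comment on its own line:
[my_sourcetype]
# drop noisy events
TRANSFORMS-null = setnull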
How have you defined line breaking?