All Posts

You need to create user-seed.conf https://docs.splunk.com/Documentation/Splunk/latest/Admin/User-seedconf
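A minimal sketch of that file, assuming no $SPLUNK_HOME/etc/passwd file exists (as after the deletion described below) and with placeholder credentials:

    [user_info]
    USERNAME = admin
    PASSWORD = your-new-password

Place it at $SPLUNK_HOME/etc/system/local/user-seed.conf and restart Splunk; the account is created on startup.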
Manipulating structured data on ingest is best done by external means (the particular method would depend on the use case). In some limited cases you could probably do some regex magic to remove parts of incoming events, but such a solution would be highly sensitive to the data format. Oh, and I'm assuming you want to remove just key4, not dynamically decide which key to drop.
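For illustration, a minimal props.conf sketch of that regex approach; the sourcetype name is hypothetical, and it assumes the "key4" pair always appears last in the object (exactly the fragility mentioned above):

    [app_json]
    SEDCMD-drop_key4 = s/,\s*"key4":\s*"[^"]*"(\s*})/\1/

SEDCMD rewrites _raw at parse time, so the matched pair never reaches the index.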
Thanks for your post.  Click on the gear on the right and select Host name. This change appeared after an upgrade and I assumed something was broken with my forwarders.  So dumb to remove the most important field from the table.
We want to use splunk-library-javalogging to send logs via Log4j to Splunk.

Service environment: Spark with log4j2 in Azure Databricks ----> Splunk Enterprise

The config file log4j2.xml:

    <?xml version="1.0" encoding="UTF-8"?>
    <Configuration status="INFO" packages="com.splunk.logging,com.databricks.logging.log4j" shutdownHook="disable">
      <Appenders>
        ...
        <SplunkHttp name="http-input"
                    url="https://url-service"
                    token="xxxx-xxxx-xxxx-xxx-xxx--xxxx"
                    host=""
                    index="my-index"
                    source="spark-work"
                    sourcetype="httpevent"
                    messageFormat="text"
                    middleware="HttpEventCollectorUnitTestMiddleware"
                    connect_timeout="5000"
                    termination_timeout="1000"
                    disableCertificateValidation="true">
          <PatternLayout pattern="%m%n"/>
        </SplunkHttp>
      </Appenders>
      <Loggers>
        <Root level="INFO">
          ...
        </Root>
        ...
        <Logger name="splunk.log4j" level="DEBUG">
          <AppenderRef ref="http-input"/>
        </Logger>
      </Loggers>
    </Configuration>

We use the library splunk-library-javalogging-1.11.8.jar with okhttp-4.11.0.jar, okio-3.5.0.jar, and okio-jvm-3.5.0.jar. We based the configuration on this example: https://github.com/splunk/splunk-library-javalogging/blob/main/src/test/resources/log4j2.xml

Currently it doesn't work. We checked HEC via curl: we can send a message from Databricks to Splunk HEC and it is received without problems. Does anyone have any experience, or can anyone help us with some guidance or advice? Thanks
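For reference, the kind of curl check described above would look roughly like this (URL, token, and index are the placeholders from the config, not real values):

    curl -k "https://url-service/services/collector/event" \
        -H "Authorization: Splunk xxxx-xxxx-xxxx-xxx-xxx--xxxx" \
        -d '{"event": "hello from curl", "index": "my-index", "sourcetype": "httpevent"}'

If that returns {"text":"Success","code":0}, HEC itself is fine and the problem is on the log4j2/appender side.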
Not sure this is even possible, but I'll ask anyway... I have application(s) that are sending JSON data into Splunk, for example:

    {
      "key1": "value1",
      "key2": "value2",
      "key3": "value3",
      "key4": "value1"
    }

As you can see, the value for "key4" is the same as "key1". So, in my example, I don't want to ingest the complete JSON payload, but only:

    {
      "key1": "value1",
      "key2": "value2",
      "key3": "value3"
    }

Can this be done?
I am trying to learn Splunk Enterprise. I created the account and logged in, no problem. I downloaded the demo data and did some stuff with that, over a few days. Finally, today I was not able to log in; it said my login info was incorrect. I figured I had messed up the password somehow, and reset it by going to the command line and using the command del /f /q "C:\Program Files\Splunk\etc\passwd" Now on the Splunk Enterprise page there is a note saying 'No users exist. Please set up a user'. How? And have I lost the demo data?
Hi, I have a small lab (air-gapped) with about 2 Linux servers (not including the Splunk server) and 25 Windows machines. I have deployed Splunk and am ingesting logs from all Linux and Windows clients, and also from a network switch, a VMware server, and hosts. I am able to send logs from the network switch and the VMware hosts directly into Splunk using "Data Inputs->TCP" and by picking different ports for each service, but for the Cisco UCS chassis I can't configure anything other than a syslog server name and log level. So I set up an rsyslog server on the same machine as Splunk Enterprise. It seems to be running, but I don't see logs from the Cisco UCS. I have checked firewall rules as well and everything seems to be configured properly. Any tips about running rsyslog and the Splunk server on the same machine, and about sending Cisco UCS logs to rsyslog/Splunk, would be appreciated. Unfortunately, I can't provide much info as this is an air-gapped lab.
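A minimal rsyslog sketch for this kind of setup, assuming the UCS sends standard syslog on UDP 514; the management IP (192.0.2.10) is hypothetical, and Splunk would monitor the output file rather than bind the syslog port itself:

    # /etc/rsyslog.d/cisco-ucs.conf
    module(load="imudp")
    input(type="imudp" port="514")

    if $fromhost-ip == '192.0.2.10' then {
        action(type="omfile" file="/var/log/cisco-ucs.log")
        stop
    }

One thing to verify on a shared machine: make sure Splunk's own TCP/UDP data inputs are not bound to the same port rsyslog listens on, since only one process can own a port. Running tcpdump on the host is a quick way to confirm whether UCS packets are arriving at all.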
Thank you @livehybrid for the above solution. However, now that I have added winrm to the app directory and deployed on Splunk Cloud, I am getting a new issue:

    ImportError: urllib3 v2 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'OpenSSL 1.0.2zk-fips 3 Sep 2024'. See: https://github.com/urllib3/urllib3/issues/2168]

And below is how I am importing winrm:

    import argparse
    import os
    import sys

    lib_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'lib'))
    sys.path.insert(0, lib_dir)

    import winrm

    def clean_old_files(TargetServer, FolderPath, FileThresholdInMinutes, UserName, Password):

Please guide me on how I can overcome this issue.
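A common workaround, sketched under the assumption that the dependencies are vendored into the app's lib directory as shown above: pin urllib3 to the 1.x line, which still supports older OpenSSL builds, when installing into that directory:

    # run from the app root; ./lib is the directory added to sys.path above
    pip install --target ./lib --upgrade "urllib3<2" pywinrm

Since ./lib is inserted at the front of sys.path, the pinned urllib3 should shadow any v2 copy that winrm's dependencies would otherwise pull in.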
In the ingestion pipeline Splunk has no notion of most of the fields, since they're usually search-time extracted. So you can either bend over backwards trying to create a regex which will account for all possible situations (which might get tricky if the order of fields isn't fixed or if their presence isn't mandatory), or you can do another dirty trick - extract the fields at index time, do an INGEST_EVAL based on their value, and then remove them (assign null to them) so they don't get indexed.
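A minimal sketch of one index-time variant, using the json_delete() eval function (available in newer Splunk versions); the sourcetype name is an assumption, and "key4" comes from the question above:

    # transforms.conf
    [remove_key4]
    INGEST_EVAL = _raw := json_delete(_raw, "key4")

    # props.conf
    [app_json]
    TRANSFORMS-remove_key4 = remove_key4

This rewrites _raw itself before indexing, so it sidesteps the regex fragility, but it still hard-codes which key to drop.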
You should try to create a suitable REGEX to match only those events. One other comment: never put a comment (#) mark in the middle of a conf line! In at least some conf files Splunk cannot recognize it as a comment and will try to do something with it, with unknown results.
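To illustrate with a hypothetical props.conf fragment:

    # this comment is fine - it sits on its own line
    [my_sourcetype]
    TRANSFORMS-route = my_transform   # this trailing comment may be read as part of the value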
How have you defined line breaking?
@David_M  Good to know that adding the outputcsv command resolved the issue.
Thanks for the reply! The rule implementation instructions say to run this search every 30 days, but they don't say how long a period I should search over. I've already tested searching the last 90 minutes, but it didn't cover even close to what I needed. I tried that time range because this search is very expensive and takes a long time. Do you have any suggestions for the schedule and the search period? Another point: I tried, but I didn't find any way to differentiate service accounts from user accounts based on fields.
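On the last point, a hedged SPL sketch: if (and only if) your service accounts follow a naming convention, something like this can split them out; the svc/srv prefixes here are purely hypothetical:

    ... | eval acct_type=if(match(user, "^(svc|srv)[_-]"), "service", "user")
    | stats count by acct_type

Without a naming convention or an identity lookup (e.g. from Active Directory), there is usually no single field that reliably distinguishes the two.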
I'm now using version 4.4.1 of the Add-On Builder, and that seems to have helped slightly. I'm still getting the same failure, but at least now it says I'm using version 2.0.1 of the python SDK. It requires version 2.0.2 though. I'm still completely lost as to how to satisfy the validator. Is anyone able to make it past this one? Here's the error output:

    [{"result": "failure",
      "message": "Detected an outdated version of the Splunk SDK for Python (2.0.1). Upgrade to 2.0.2 or later. File: bin/splunklib/__init__.py Line Number: 33",
      "message_filename": "bin/splunklib/__init__.py",
      "message_line": 33},
     {"result": "warning",
      "message": "Splunk SDK for Python detected (version 2.1.0). No action required at this time. File: bin/ta_ignio_integration_add_on/aob_py3/splunklib/__init__.py Line Number: 33",
      "message_filename": "bin/ta_ignio_integration_add_on/aob_py3/splunklib/__init__.py",
      "message_line": 33}]

I'm also getting this validation error:

    [{"result": "failure",
      "message": "Found AArch64-incompatible binary file. Remove or rebuild the file to be AArch64-compatible. File: bin/ta_ignio_integration_add_on/aob_py3/yaml/_yaml.cpython-37m-x86_64-linux-gnu.so",
      "message_filename": "bin/ta_ignio_integration_add_on/aob_py3/yaml/_yaml.cpython-37m-x86_64-linux-gnu.so",
      "message_line": null},
     {"result": "failure",
      "message": "Found AArch64-incompatible binary file. Remove or rebuild the file to be AArch64-compatible. File: bin/ta_ignio_integration_add_on/aob_py3/markupsafe/_speedups.cpython-37m-x86_64-linux-gnu.so",
      "message_filename": "bin/ta_ignio_integration_add_on/aob_py3/markupsafe/_speedups.cpython-37m-x86_64-linux-gnu.so",
      "message_line": null}]

Unfortunately, the Add-On Builder doesn't mention how to actually fix either issue, so I'm stuck for now. You mentioned using -t to target the pip install, but I'm not sure where I would be targeting.
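Not an authoritative fix, but a sketch of the usual approach, assuming the add-on source sits in the current directory and that the pure-Python fallbacks of PyYAML and MarkupSafe are acceptable (both libraries work without their compiled extensions):

    # upgrade the flagged splunklib copy in place (-t/--target is the pip option mentioned above)
    pip install --target bin --upgrade "splunk-sdk>=2.0.2"

    # drop the x86_64-only compiled extensions so only portable .py code ships
    rm bin/ta_ignio_integration_add_on/aob_py3/yaml/_yaml.cpython-37m-x86_64-linux-gnu.so
    rm bin/ta_ignio_integration_add_on/aob_py3/markupsafe/_speedups.cpython-37m-x86_64-linux-gnu.so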
I came across an identical thread at Re: How does Splunk calculate linecount? - Splunk Community
@livehybrid yes, I am trying it outside the dashboard, in the search bar. I am not getting any error, but no results either.
Hi Kiran, Yes, adding the outputcsv command fixed the issue. Thanks! David
Hi @David_M

Did you use outputcsv, or some other method for exporting the csv, such as the "Output results to lookup" alert action? As previously mentioned, the output path for outputcsv is $SPLUNK_HOME/var/run/splunk/csv - however, these files are not replicated across the cluster if you are running a SHC. If you're using outputcsv, can you confirm you aren't using dispatch=true? If you are, then your output will be in $SPLUNK_HOME/var/run/splunk/dispatch/<job id>/csv.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
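For a quick sanity check, a throwaway search like this (the filename is arbitrary) shows where the file lands:

    | makeresults count=3 | eval value=random() | outputcsv test_export.csv

On a standalone search head the file should then appear as $SPLUNK_HOME/var/run/splunk/csv/test_export.csv on that host.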
Hello, I had an additional question about filtering on data from two fields. For example, I do NOT want to send to the distant Splunk any data where dest_zone or src_zone contains DMZ-NETWORK or GUEST-NETWORK. My config is something like this:

transforms.conf

    [clone_only_dmz-network]
    REGEX = ^((?!dmz-network).)*$
    CLONE_SOURCETYPE = cloned:firewall_clone
    DEST_KEY = _SYSLOG_ROUTING #it's syslog logs
    FORMAT = distant_splunk

props.conf

    [firewall_sourcetype]
    TRANSFORMS-clone = clone_only_dmz-network

outputs.conf

    [syslog:distant_splunk]
    server = ip of the distant HF

This is actually working, I tested it. But if I want to exclude multiple values/fields like this, how can I achieve that?
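For illustration, one way to extend the REGEX above to cover both zone values (a sketch building directly on the stanza in the post; the alternation assumes the zone names appear literally in the raw event):

    [clone_only_internal]
    REGEX = ^((?!dmz-network|guest-network).)*$
    CLONE_SOURCETYPE = cloned:firewall_clone
    DEST_KEY = _SYSLOG_ROUTING
    FORMAT = distant_splunk

Note this matches anywhere in _raw, not per field; matching only inside dest_zone/src_zone would need a more anchored expression.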
Thanks much, this seems like something I can point our administrators at directly. Can you comment on the problems reported by isoutamo below?