All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

"it is totally unnecessary to install a UF on a SH" -> Requirements are determined by policy, so if policy says that all Splunk components must forward to a central Splunk deployment for monitoring, then it is necessary. We have a use case that also requires us to install the Splunk UF on all components: indexers, search heads, deployment servers. I believe a forwarder can dual-pipe (send to two destinations); whether it can route only certain indexes to one of them, I am not sure. E.g.: indexes 1, 2, 3 only -> pipe to central Splunk; all indexes -> pipe to local Splunk.
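On whether a forwarder can route only certain indexes: index-based routing is generally done on a heavy forwarder, since the routing transform needs parsed events (a UF can only route per input or per output group). As a rough sketch, where the group names, hostnames, and index names are all hypothetical placeholders, not from the original post:

```ini
# outputs.conf -- two destination groups (names and hosts are hypothetical)
[tcpout]
defaultGroup = local_splunk

[tcpout:local_splunk]
server = local-splunk.example.com:9997

[tcpout:central_splunk]
server = central-splunk.example.com:9997

# props.conf -- apply the routing transform to all events
[default]
TRANSFORMS-route_by_index = route_selected_to_central

# transforms.conf -- events from the selected indexes go to BOTH groups;
# everything else falls through to defaultGroup (local only)
[route_selected_to_central]
SOURCE_KEY = _MetaData:Index
REGEX = ^(index1|index2|index3)$
DEST_KEY = _TCP_ROUTING
FORMAT = central_splunk,local_splunk
```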
Selected date: 1 Jan 2024 to 2 Jan 2024
----------------------------------------------------------------
index="bsds_gans" earliest=1704096000 latest=+1d pfor IN (*) test IN (*) name IN (*) ckb IN (*) vrsion IN (*) id IN (*) location IN (*) group IN (*)
| eval pfor=upper(pfor)
| eval _time = start_time
| eval WW=strftime(_time, "%V.%w")
| eval name=mvindex(split(context,"."),1)
| search name !="*Case Setup*"
| eval name=mvindex(split(name,".PSPV"),0)
| eval id=mvindex(split(name," - "),0)
| search id IN (*)
| eval main=mvindex(split(name," - "),1)
| search main IN (*)
| stats count(eval(sta="FIL")) as fail_count, count(eval(sta="PASS")) as pass_count, count(eval(like(sta,"LOCKED%"))) as not_run_count by name,id

Selected date: 1 Jan 2024 to 13 Jan 2024
----------------------------------------------------------------
index="bsds_gans" earliest=1704096000 latest=+1d pfor IN (*) test IN (*) name IN (*) ckb IN (*) vrsion IN (*) id IN (*) location IN (*) group IN (*)
| eval pfor=upper(pfor)
| eval _time = start_time
| eval WW=strftime(_time, "%V.%w")
| eval name=mvindex(split(context,"."),1)
| search name !="*Case Setup*"
| eval name=mvindex(split(name,".PSPV"),0)
| eval id=mvindex(split(name," - "),0)
| search id IN (*)
| eval main=mvindex(split(name," - "),1)
| search main IN (*)
| stats count(eval(sta="FIL")) as fail_count, count(eval(sta="PASS")) as pass_count, count(eval(like(sta,"LOCKED%"))) as not_run_count by name,id

Selected: last 7 days
----------------------------------------------------------------
index="bsds_gans" earliest=-7d@h latest=+1d pfor IN (*) test IN (*) name IN (*) ckb IN (*) version IN (*) id IN (*) location IN (*) group IN (*)
| eval pfor=upper(pfor)
| eval _time = start_time
| eval WW=strftime(_time, "%V.%w")
| eval name=mvindex(split(context,"."),1)
| search name !="*Case Setup*"
| eval name=mvindex(split(name,".PSPV"),0)
| eval id=mvindex(split(name," - "),0)
| search id IN (*)
| eval main=mvindex(split(name," - "),1)
| search main IN (*)
| stats count(eval(sta="FIL")) as fail_count, count(eval(sta="PASS")) as pass_count, count(eval(like(sta,"LOCKED%"))) as not_run_count by name,id
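One thing worth noting about the queries above: in SPL, a relative latest=+1d is anchored to the search's execution time ("now"), not to earliest, so mixing an epoch earliest with latest=+1d may not bound the range the way the date picker suggests. A quick way to sanity-check epoch boundaries (values computed in UTC here; the timezone behind the original 1704096000 value is not stated in the post):

```python
from datetime import datetime, timezone

# Convert the query's epoch earliest back to a readable UTC timestamp.
earliest = 1704096000
print(datetime.fromtimestamp(earliest, tz=timezone.utc).isoformat())
# 2024-01-01T08:00:00+00:00  (i.e. midnight 1 Jan 2024 in a UTC-8 timezone)

# Compute the epoch value for an intended boundary, e.g. 2 Jan 2024 00:00 UTC:
latest = int(datetime(2024, 1, 2, tzinfo=timezone.utc).timestamp())
print(latest)  # 1704153600
```

Pinning both earliest and latest to explicit epoch values avoids any ambiguity about what "+1d" is relative to.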
Hi @CarolinaHB, While that's true, changing the server.conf in C:\Program Files\SplunkUniversalForwarder\etc\system\local\ will give you the desired results. As @jtacy said, it's a best practice to place the server.conf file in a separate app; that would be $SPLUNK_HOME/etc/apps/myapp/local/server.conf. Recommended read on config files: https://docs.splunk.com/Documentation/Splunk/9.1.3/Admin/Wheretofindtheconfigurationfiles HTH
Further update: the JS in the previous thread works as long as I leave the browser on that particular dashboard; I tested it running for a few days and no logouts occurred. Anyway, I'm still hoping there could be a more direct way to set this for the group or user, though.
Hello! Sorry, I don't think I ever realized that in the new Answers, app developers don't actually get notified when there is a question about their apps. So I only saw this question because @tscroggins @'ed me directly (thanks, by the way). Going forward I have now "subscribed" to my own app, so although that seems weird, perhaps it will help. The "pain" field is actually calculated from a macro in the app called "estimate_pain", and you are free to try out some modifications. What ships is a somewhat complex thing that depends on total_run_time, the ratio of scan_count to event_count, has_index_term, has_pre_command, various logic around which command is the first transforming command (strongly penalizing things like "table"), and also avg_pct_memory and max_mem_used. There are also some exceptions poked into the logic; for instance, if the first command is metadata or makeresults it short-circuits some of the logic, and likewise if the first_transforming command is "head", etc. The INTENTION is that high "pain" correlates strongly with the sort of searches that the Splunk deployment's admins would want to know about, so they can go educate or help that user do something less awful. I am super curious what you see, what your reaction is, and what your suggestions are. Answers is fine, so we can talk on here. Note, however, that the landing page of the sideview_ui app also exhorts you to email anything and everything to sideview_ui@sideviewapps.com or to post your question on the app's channel on the Splunk community Slack. I hope that helps, and please send in any and all feedback, in any area and in any quantity. Thanks.
Hello, @jtacy. A question: is the file being changed the one from C:\Program Files\SplunkUniversalForwarder\etc\system\local\? Thank you very much. Regards.
Please check the truncated event from the syslog server. We are attempting to send logs to both the Splunk indexer and the syslog server because different teams handle distinct log types; my team manages the system security logs specifically for SOC team monitoring.
Thanks so much.
Field aliases are specific to a sourcetype.  To have an alias for a field in two sourcetypes requires two aliases.
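As a sketch of what that looks like in practice (the sourcetype names and field names below are hypothetical, since the question doesn't give them), the two aliases would be separate FIELDALIAS settings under each sourcetype's stanza in props.conf:

```ini
# props.conf -- one FIELDALIAS per sourcetype (all names are hypothetical)
[sourcetype_a]
FIELDALIAS-user = src_user AS user

[sourcetype_b]
FIELDALIAS-user = login_name AS user
```

Even though both aliases produce the same target field, each sourcetype needs its own definition.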
A user wants to create a new field alias for a field that appears in two sourcetypes. How many field aliases need to be created, one or two? I think it should be one, but the answer says two. Please explain.
Hi @sarvananth, Have you reviewed rsyslog documentation for maximum message length and line endings? If you're forwarding using a syslog output over UDP, the transport itself has a limit of 65,535 bytes per datagram (subtract headers for maximum payload length). You may also want to transform the events by replacing line endings with an escape sequence of your choosing (or one required by the consumer).
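As a quick worked example of the arithmetic behind that limit (the IPv4 and UDP header sizes are standard protocol constants, not taken from the post):

```python
# Largest syslog message that fits in a single UDP datagram over IPv4.
IP_TOTAL_LENGTH_MAX = 65535   # IPv4 total-length field is 16 bits
IPV4_HEADER_MIN = 20          # minimum IPv4 header (no options)
UDP_HEADER = 8                # fixed UDP header size

max_payload = IP_TOTAL_LENGTH_MAX - IPV4_HEADER_MIN - UDP_HEADER
print(max_payload)  # 65507
```

Anything longer than that has to be truncated, split, or sent over TCP instead.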
Hi Splunkers, The origin of the problem was corrupted buckets; in my case, 3 buckets were corrupted. This is what happens when an analyst pushes a bad search request and the splunkd daemon gets killed on indexers that were up and running during the decommissioning of one of them. Check: https://docs.splunk.com/Documentation/Splunk/Latest/Troubleshooting/CommandlinetoolsforusewithSupport#fsck I used the command (on the indexer holding the bucket; that indexer has to be stopped too): splunk fsck repair [bucket_path] [index] (use a "find /indexes/path | grep bucket_uid$ | grep [index's bucket]" to find its path). fsck confirmed the problem. In my case it was not repairable, so since the data were old and very small, the decision was made to delete those buckets. After that everything went back to normal. Problem solved. Thanks for the help.
What happens when you run the following command from <your_stack_url>/app/splunk-app-sfdc/search: | inputlookup lookup_sfdc_usernames Do you see any results? Do you have any duplicate definitions of LOOKUP-SFDC-USER_NAME under Settings > Lookups > Automatic Lookups with App: All and Owner: Any? When you search against sourcetype=sfdc:loginhistory, do you still see errors? You can view search logs from Job > Inspect Job. In search.log, search for LOOKUP-SFDC-USER_NAME to see additional context. To view logs from indexers, add noop to your search: index=your_index sourcetype=sfdc:loginhistory | noop remote_log_fetch=*
The screenshot shows an untruncated event.  What makes you believe the logs are getting truncated?  Please show a sanitized sample truncated event. Why are the events going from a Splunk HF to a syslog server instead of to a Splunk indexer?
Hi, First of all, thanks for helping me with this issue. I tried all the things you suggested, but I still have the same error. - The inputs.conf on my UF doesn't allow configuring the interface (I checked the inputs.conf.spec file to verify this). - My kernel is up to date, so the problem isn't coming from there. - As for versions, after checking, I have the latest versions of the UF and the add-on available on Splunkbase. - In the Community Resources, I found one link relating to this type of problem, but there is no answer there. Here is the link if you are interested: https://community.splunk.com/t5/Deployment-Architecture/streamfwd-app-error-in-var-log-splunk-streamfwd-log/m-p/675366#M27880 If you have any more suggestions to fix my issue, I would be very grateful to hear them.
Please share some anonymised sample events to show what you are working with
We are using Splunk Universal Forwarder (UF) to forward logs from a Windows server to a Splunk Heavy Forwarder (HF). However, when the Splunk HF receives logs of a specific type as multiline, an issue arises. In this case, when attempting to forward these logs from the Splunk HF to a syslog server (a Linux server with rsyslog configuration), the logs are getting truncated. How can we address and resolve this issue?
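One approach worth sketching (the sourcetype name and the escape sequence below are assumptions, not from the post) is to flatten the line endings on the HF at parse time with a SEDCMD, so each multiline event leaves the line-oriented syslog output as a single message:

```ini
# props.conf on the heavy forwarder (sourcetype name is hypothetical)
[your:multiline:sourcetype]
# Replace embedded CR/LF runs with a literal "\n" escape so the whole
# event survives the syslog transport as one line:
SEDCMD-flatten_newlines = s/[\r\n]+/\\n/g
```

The consumer on the rsyslog side can then unescape the sequence if it needs the original line breaks back.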
I want to write a query that prints the users who are not authorized to enter. There is, of course, a lookup table in which the people who are authorized to enter are listed.
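As a sketch of one common pattern (the index, field, and lookup names below are hypothetical, since the post doesn't name them): exclude everyone who matches the lookup with a NOT subsearch, leaving only the unauthorized users:

```spl
index=your_auth_index sourcetype=your_sourcetype
| search NOT [| inputlookup authorized_users.csv | fields user]
| stats count by user
```

The subsearch returns the user values from the lookup, and NOT inverts the match; the lookup's field name must match the event field (or be renamed in the subsearch) for this to filter correctly.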
The add-on is also installed.
Hi there, Here's a breakdown of potential issues and solutions:

1. Regex accuracy: Double-check that the regular expressions (REGEX) accurately match your expected data patterns. Test them thoroughly using online regex testers or Splunk's rex command. Ensure the source and sourcetype fields contain the correct values for extraction.

2. FORMAT syntax: The FORMAT field should use $1 to reference the first captured group from the regex, not $environment. Here's the corrected format: FORMAT = complaince_int_front::@service_$1

3. Transform order: If both transforms are applied to the same data, consider their order. The environment_extraction transform might overwrite service_extraction if it runs first. Adjust the order in transforms.conf if needed.

4. props.conf: Verify that props.conf correctly sets the _MetaData:Index field for indexing.

5. Troubleshooting steps: Examine Splunk's internal logs for errors or warnings related to transforms. Isolate issues by testing your regex against sample events with the rex command before committing it to transforms.conf.

Additional tips: Consult Splunk's documentation for in-depth guidance on index-time transforms and regular expressions. Test changes thoroughly in a non-production environment before deploying to production, and regularly review and update transforms to ensure they align with evolving data patterns.

~ If the reply helps, a Karma upvote would be appreciated
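To illustrate the capture-group point in item 2 in a neutral way (the sample value and pattern here are made up for illustration, not taken from your transforms), this is how a first captured group is referenced in a replacement, analogous to $1 in a transforms.conf FORMAT:

```python
import re

# Hypothetical raw value containing an environment token.
raw = "host=web01 environment=prod service=checkout"

# Capture the environment name in group 1...
pattern = r"environment=(\w+)"

# ...and reference it in the replacement, like $1 in FORMAT.
result = re.sub(pattern, r"environment_tag::\1", raw)
print(result)  # host=web01 environment_tag::prod service=checkout
```

A literal token like $environment would never be populated from the regex; only numbered group references are substituted.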