All Posts

Hi @tungpx, the usual way to see if a Forwarder configuration has been updated is to check whether updates are running or not; but anyway, you could try to create an index-time field with the update version and check it. This is a description of how to do it: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/Configureindex-timefieldextraction Ciao. Giuseppe
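A minimal sketch of what such an index-time extraction could look like, assuming the version string appears in the raw event; the sourcetype, stanza names, field name, and regex below are illustrative, not from the post:

```
# props.conf (on the instance that parses the data)
[my_sourcetype]
TRANSFORMS-uf_version = extract_uf_version

# transforms.conf
[extract_uf_version]
REGEX = version=(\S+)
FORMAT = uf_version::$1
WRITE_META = true

# fields.conf (on the search head, so the field is treated as indexed)
[uf_version]
INDEXED = true
```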
Hi @Kenny_splunk, I agree with @PickleRick: you should try to describe what you did, so we can understand what happened. Anyway, the issue is probably in the moved folders. But if you deleted the installation, it's very difficult to recover it, unless you can restore a backup. Maybe (and I say maybe) Splunk Support can help you. Anyway, as a last chance, you could try to move the indexes from their current location to a new safe one, and then create a fresh installation that should run. Then you could stop Splunk and copy the saved index folders to the new location of $SPLUNK_DB (by default $SPLUNK_HOME/var/lib/splunk), or change the value of $SPLUNK_DB to point to the new location of the indexes. Then, at least, you should create all the stanzas for your indexes in one indexes.conf, using exactly the same names as your indexes. In this way it should run; let us know if you solved it. Ciao. Giuseppe
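For the last step, a minimal sketch of an indexes.conf stanza (the index name here is illustrative; each stanza must use exactly the same name as the original index, and the paths must point at the restored data):

```
# $SPLUNK_HOME/etc/system/local/indexes.conf
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
```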
Hi @tej57, Thanks for your answer. I hope this functionality will be released soon. It's a reason for us to keep using the classic version for most of our dashboards.
Thanks, but I already tried that and it does not work.
Ok. You did something. And now your environment somehow doesn't work. Not knowing that something and somehow (and not even knowing what version we're talking about; I can only assume we're talking about the Linux version), how are we supposed to know what's going on and how to fix it?
Hello, I have a deployment server and deployed an app on a Universal Forwarder, like I usually do (create an app folder -> create a local folder -> write inputs.conf -> set up the app and server class on the DS, tick disable/enable app, tick restart Splunkd). But after making sure of the log path and the permissions of the log file (664), I don't see the log forwarded. I only manage the Splunk deployment, not the server that hosts the Universal Forwarder, so I asked the system team to check it for me. After some time, they got back to me and said there was no change to the inputs.conf file. They had to manually restart Splunk on the Universal Forwarder, and after that I finally saw the log ingested. So I want to know if there is an app, or a way, to check whether the app or the inputs.conf was changed according to my config on the DS; I can't ask the system team to check it for me all the time. Thank you.
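A minimal sketch of two checks that can help answer this, assuming a standard installation ($SPLUNK_HOME may differ on your hosts):

```
# On the forwarder host: print the effective inputs and which app/file
# each setting comes from, to confirm the deployed inputs.conf is live
$SPLUNK_HOME/bin/splunk btool inputs list --debug

# On the deployment server: list the clients that have phoned home,
# with their last connection details
$SPLUNK_HOME/bin/splunk list deploy-clients
```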
I have checked splunkd.log and haven't noticed any particular error or warning related to this; the same goes for the web UI log.
Hello, We are experiencing an issue with the SOCRadar Threat Feed app in our Splunk cluster. The app is configured to download threat feeds every 4 hours; however, each feed pull results in duplicate events being downloaded and indexed. We need assistance in configuring the app to prevent this duplication and ensure data deduplication before being saved to the indexers.
Haven't noticed any particular error in splunkd.log / UI access logs.
Since your last update on 21 Oct 2016 stating that Splunk Enterprise Security does not support multi-tenancy, what is the status right now? Does Splunk Enterprise Security now support multi-tenancy?
This worked! Much appreciated, thank you.
Thank you for your reply. UDP port 514 was in use. I have no idea why it was used by another process, so I needed to use another port to receive packets from the Palo Alto server. However, I solved this problem: the firewalld daemon was blocking the packets coming into Splunk. I stopped firewalld, and then I could search the Palo Alto logs. I will now move on to the next step of issuing alerts from these logs.
Try these props.conf settings.

[dolphin]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d\d:\d\d:\d\d\d
DATETIME_CONFIG = current
Hey guys, so I was basically trying to set up Splunk to work with the terminal (bad idea). I ended up moving directories using the CLI and boom! It doesn't work anymore, and I have no way to undo the change via the terminal. I tried deleting and redownloading from Splunk, but it doesn't work. Please tell me someone has an answer or a way to reset the directories for the version I once had; I had so much data and so many apps to practice with. P.S. Even if there isn't a way to get my old version back, I would still like to know why it's not working when I try to redownload a new instance.
I am setting up a monitor on the log file for my Dolphin GameCube emulator. Dolphin and Splunk Enterprise are both running locally on my machine (Windows 11). Splunk is ingesting multiple lines per event, and my hope is to get each line to ingest as a separate event. I have tried all kinds of different props.conf configurations including SHOULD_LINEMERGE, LINE_BREAKER, BREAK_ONLY_BEFORE, etc. I'll paste a sample of the log file below. In this example, Splunk is ingesting lines 1 & 2 as an event, and then 3 & 4 as an event. When I turn on more verbose logging, it will lump even more lines into an event, sometimes 10+.

21:23:310 Common\FileUtil.cpp:796 I[COMMON]: CreateSysDirectoryPath: Setting to C:\Users\whjar\mnt\file-system\opt\dolphin\dolphin-2409-x64\Dolphin-x64/Sys/
21:23:323 DolphinQt\Translation.cpp:155 W[COMMON]: Error reading MO file 'C:\Users\whjar\mnt\file-system\opt\dolphin\dolphin-2409-x64\Dolphin-x64/Languages/en_US.mo'
21:24:906 UICommon\AutoUpdate.cpp:212 I[COMMON]: Auto-update JSON response: {"status": "up-to-date"}
21:24:906 UICommon\AutoUpdate.cpp:227 I[COMMON]: Auto-update status: we are up to date.
No, the anchor is the pattern for the text that you want to appear before and/or after the field you want to extract. For example, if your event contains "Event of type X with user id: abc123" and you wanted to extract the user id, your regex might be something like "X.* user id: (?<userid>\w+)". The "user id: " part would be the anchor for the field you are going to extract. You could also argue that the "X" is an anchor too, as it ensures that the pattern will only match if the text being extracted from contains "X".
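A quick way to try this out, using the example event from above in a throwaway search (makeresults just fabricates one event to run rex against):

```
| makeresults
| eval _raw="Event of type X with user id: abc123"
| rex field=_raw "X.* user id: (?<userid>\w+)"
| table userid
```

This should produce a single row with userid=abc123.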
https://ideas.splunk.com/ideas/PLECID-I-670
Need help passing a token value from a Single Value panel (using | stats count in conjunction with | rex field=_raw) to a Stats Table panel.

I created a dashboard showing various "winevent" logs for user accounts (created, enabled, disabled, deleted, etc.). The current search for my various Single Value panels using the stats command is shown below (for this example, I used Windows event code 4720 to count "User Account Created" events on the network) and extracted the EventCode.

Acct Enable:
index="wineventlog" EventCode=4720
| dedup user
| rex field=_raw "(?m)EventCode=(?<eventcode>[\S]*)"
| stats count

The output gives me a Single Value count of Windows event codes equal to 4720, ignoring duplicate user records.

I am now trying to capture the extracted "eventcode" in a token via a drilldown for each respective count panel. I have set up the token as (Set $token_eventcode$ = $click.value$) in the drilldown editor. Using that token, I want to display the respective records in a second panel as a table, as seen below:

Acct Enable:
index="wineventlog" EventCode=$token_eventcode$
| table _time, user, src_user, EventCodeDescription

As I am still learning how to use the rex command, I am having problems capturing the EventCode from the _raw logs, setting it into the $token_eventcode$ token in the Single Value count query, and passing that value down through the token to the table while maintaining the stats count value. Any assistance would be greatly appreciated.
Yeah, I have been testing on regex101; there seems to be some delta in how Splunk processes the regex, however. For example, this is what I have so far: https://regex101.com/r/95JbuG/1 but when I add another event to this, the regex stops working.
In Regex 1, you seem to have .* backwards (*.) in two instances; the one near the end is particularly problematic. If you have:

(%%1936|TokenElevationTypeDefault|TokenElevationTypeLimited)*.

then it will match those strings 0 or more times, so it will also match events which don't include %%1936 or the other strings. Try removing the *. near the end. Also, I recommend testing the regex on a site like regex101.com to make sure it is working before you put it in your Splunk config.
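To see the difference concretely, here is a small throwaway search (the sample event is made up); because the group is quantified with * and followed by ., the pattern matches even an event containing none of the listed strings:

```
| makeresults
| eval _raw="An event with no elevation marker at all"
| regex _raw="(%%1936|TokenElevationTypeDefault|TokenElevationTypeLimited)*."
```

The event passes the filter. Dropping the *. (so the alternation itself must match) makes the same search return nothing for this sample event.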