All Posts

Hi @deepakc, I was checking whether I can use SEDCMD to remove that blank event, but I am not sure how to use it. Or try this: https://community.splunk.com/t5/Splunk-Search/Why-is-the-regex-creating-empty-events-from-incoming-data/m-p/396432. The previous event ends with a ".", so can I try the above method?
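For what it's worth, SEDCMD rewrites _raw rather than discarding events, so the usual way to drop fully blank events is a nullQueue transform. A minimal sketch, assuming the mssql:aud sourcetype from this thread and that the blank events really are empty:

props.conf (on the HF or indexer):
[mssql:aud]
TRANSFORMS-drop_blank = drop_blank_events

transforms.conf:
[drop_blank_events]
REGEX = ^\s*$
DEST_KEY = queue
FORMAT = nullQueue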
I suspect that the MSSQL TA is normally supported and works in conjunction with the DB Connect add-on for sourcetype formatting (it uses SQL queries and then does the props and transforms part), hence you're not seeing any values. So I suspect it's not being parsed correctly.

For the Windows logs you normally use the Windows TA, which contains the props and transforms for the standard Windows event channels (Application/Security/System, etc.), so that TA carries the parsing code.

I don't have a test environment, so I can't check, but you could try:

Change your sourcetype, as there is a typo: sourcetype = mssql:aud should be sourcetype = mssql:audit. See if that works.

Perhaps set renderXml = true in inputs.conf with a new sourcetype mssql:aud:xml, and create a props.conf stanza for mssql:aud:xml with KV_MODE = xml (this is just try-and-see, without testing).

If that doesn't work, then stick with the DB Connect solution.
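If it helps, here is that try-and-see suggestion sketched in config form (untested, as noted above; mssql:aud:xml is just the working name proposed here, not a sourcetype shipped by the TA):

inputs.conf (on the UF):
[WinEventLog://Security]
whitelist1 = 33205
index = test_mssql
renderXml = true
sourcetype = mssql:aud:xml
disabled = 0

props.conf (on the search head):
[mssql:aud:xml]
KV_MODE = xml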
I tried this but got an error:

Error in 'EvalCommand': Failed to parse the provided arguments. Usage: eval dest_key = expression. The search job has failed due to an error. You may be able to view the job in the…
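That usage message generally means the right-hand side of eval isn't a valid expression; for instance, SPL does not support trailing // comments, so a line like subject = $email_subject // ... will not parse. A minimal valid form for comparison (the field name and value are illustrative only):

| makeresults
| eval email_subject="Alert: disk usage high"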
You say you didn't have a TA (props and transforms) on the HF before. Normally, as HFs are full Splunk instances, you should have the TA there for parsing (the data is then "cooked" before it reaches the indexer); if you were sending direct, then the TA on the indexer would suffice. Why it worked before the upgrade I don't know, but as to the upgrade path: you should always follow the documented path, as skipping versions can introduce breaking changes, so that could be a factor.

I would try deploying your custom TA (the props code) onto the HF and see if that makes a difference. As you already have this TA deployed to the current SH/IDX, you should be able to continue with normal field extractions once it sees the sourcetype. So, ensure the config for this data source lives in a custom TA, or copy it exactly as it is on the SH/IDX, deploy it to the HF, and restart. Tip: for consistency, keeping the config in one custom TA app is best practice; otherwise use /local/props.conf.
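For illustration, a sketch of the kind of stanza that would travel in such a custom TA, assuming simple search-time KV extraction (the sourcetype name is a placeholder, not the real one from this thread):

props.conf (in the custom TA, deployed to the HF and the SH/IDX):
[my:custom:sourcetype]
SHOULD_LINEMERGE = false
KV_MODE = auto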
How do I allow Splunk to be accessed publicly? If I am using Splunk from a different gateway, what will I have to do to use Splunk Web?
HTTP Event Collector examples - Splunk Documentation. Need troubleshooting suggestions, if possible, for what is available with user-level access.
I post metrics to HEC according to the spec in Get metrics in from other sources - Splunk Documentation. The API reports back HTTP 200, "success", but the stats are not viewable in the Analytics area. I am unable to view index=_internal and have no access to the main system, so I wanted to troubleshoot from my side. When using the same endpoint for logs, it is fully functional. Can anyone provide suggestions for troubleshooting?
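Two searches that may help confirm whether the metric points actually landed, assuming you know (and can read) the metrics index name; my_metrics is a placeholder:

| mpreview index=my_metrics

| mcatalog values(metric_name) WHERE index=my_metrics

If both come back empty, one common cause is a HEC token pointing at an event index rather than a metrics index, since HEC returns 200 either way.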
Hi all fellow Splunkthusiasts, I need your help in understanding what could possibly go wrong in a HFW upgrade. We are collecting some logs through a HFW and rely on search-time extractions for this sourcetype. After upgrading the HFW, the extractions stopped working. The data format is basically several Key=Value pairs separated by newlines. Investigating this, I found out:

There is no props.conf configuration for this sourcetype on the upgraded HFW (and it wasn't there even before the upgrade). All relevant configuration is on another instance serving as both indexer and search head.

props.conf for the relevant sourcetype on the search head has AUTO_KV_JSON=true; I don't see KV_MODE in splunk show config props / splunk btool props list (I suppose it takes the default value "auto").

I have already realized that the upgrade didn't take the right path (it was 7.2.6 upgraded to 9.2.1, without gradually upgrading through 8.2).

Except for search-time extractions, everything seems to work as expected (data is flowing in, event breaking and timestamp extraction look correct). What I don't understand is how a HFW upgrade can even affect search-time extractions on another instance. From there I am a bit clueless about what to focus on to fix this.
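One way to see exactly which KV-related settings apply to the sourcetype on the search head is btool with --debug, which also shows which app each setting comes from (replace the placeholder with your sourcetype):

splunk btool props list <your_sourcetype> --debug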
Hi Team, there is a requirement to get the license usage split in GB on a daily basis for the top 20 log sources, along with the host, index, and sourcetype details. Kindly help with the query.
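A minimal sketch against the license usage log (this requires access to index=_internal on the license manager; note that if Splunk's squashing kicked in, the h and s fields may be empty for some rows):

index=_internal source=*license_usage.log* type=Usage earliest=-30d@d
| bin _time span=1d
| stats sum(b) AS bytes BY _time, h, s, st, idx
| eval GB=round(bytes/1024/1024/1024, 3)
| sort - GB
| head 20
| rename h AS host, s AS source, st AS sourcetype, idx AS index

Adjust the sort/head if you need the top 20 within each day rather than overall.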
Hello, we have some AWS accounts that use Firehose to forward logs from AWS to Splunk. A few days ago, I received a notification that the number of acquired channels started rising rapidly and hit the limit; since then we have not been able to send logs to Splunk. Splunk Support helped us move to a new Firehose endpoint, but we still see ServerBusy because of the channel limit. Is there any option to monitor how our streams are consuming the channels, and do you have any advice on how to improve this behaviour?
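If you can search the indexers' _internal logs, a crude but safe starting point is a raw-text sweep for the error, with no assumptions about specific components or dashboards:

index=_internal sourcetype=splunkd "ServerBusy"
| stats count BY component, host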
Hi @gcusello, I think I found my problem: I guess I don't have the 9997 port open on my forwarder server. This is the screenshot. How can I open the 9997 port?
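Assuming a Linux host, how you open the port depends on which firewall is in use. Two common sketches, run on whichever instance is configured to receive on 9997:

firewalld (RHEL/CentOS):
sudo firewall-cmd --permanent --add-port=9997/tcp
sudo firewall-cmd --reload

ufw (Ubuntu/Debian):
sudo ufw allow 9997/tcp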
Hi @madhav_dholakia, I'm not sure. Please remove the last line of my query, I mean this:

subject = $email_subject // Use the dynamically generated subject

Then in the subject box in "Edit Alert" put this: Alert: $email_subject$ (or just $email_subject$). If it doesn't work, post an image of the Edit Alert section of your alert here.
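A minimal sketch of the pattern being described, with an illustrative search (not the original one); note that in saved-alert email subjects, a result field is normally referenced as $result.fieldname$:

index=web_logs status>=500
| stats count BY host
| eval email_subject="Server errors on " . host

Subject box in Edit Alert: Alert: $result.email_subject$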
OK, now I ran the telnet <my-forwarders-ip> 9997 command from my Windows PC. The result is "Could not open connection to the host, port 9997. Connect failed." I ran it for both the private IP and the public IP. My Windows firewall is disabled, and my forwarder server doesn't even have a firewall installed.
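It is also worth confirming on the receiving Splunk server that something is actually listening on 9997. Two quick checks (the second assumes you run it from $SPLUNK_HOME/bin):

sudo ss -tlnp | grep 9997

./splunk display listen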
Hi all, I am trying to integrate MS SQL audit log data with a UF instead of DB Connect. What is the best and recommended way to do it that maps all fields? At the moment it is integrated with the UF, using the "Splunk Add-on for Microsoft SQL Server". I am also seeing one additional dummy event (completely blank, no values) alongside every event that comes in. My inputs.conf:

[WinEventLog://Security]
start_from = oldest
current_only = 0
checkpointInterval = 5
whitelist1 = 33205
index = test_mssql
renderXml = false
sourcetype = mssql:aud
disabled = 0
Hi @Cyner__, you should run the telnet from the client, not from the server: telnet my-private-ip 9997. If it doesn't answer, there's something in the middle (e.g. a personal firewall) blocking the connection. Ciao. Giuseppe
When you complete the setup, it will create a local/passwords.conf. You're saying you don't have a passwords.conf, so did you complete the setup? If so, did you set the default org/key, or did you name it something other than "default"? When you have "default" set, you don't have to specify the org or key with the openai/ChatGPT command.
Ah, also, when I click the "Data Summary" button in Splunk Enterprise Web, I only see "waiting for results".
The following two errors repeat every minute in splunkd.log on Splunk Enterprise. What is causing this?

06-07-2024 10:45:00.314 +0200 ERROR ExecProcessor [2519201 ExecProcessorSchedulerThread] - message from "/data/splunk/bin/python3.7 /data/splunk/etc/apps/search/bin/quarantine_files.py" Quarantine files framework - Unexpected error during execution: Expecting value: line 1 column 1 (char 0)
06-07-2024 10:45:00.314 +0200 ERROR ExecProcessor [2519201 ExecProcessorSchedulerThread] - message from "/data/splunk/bin/python3.7 /data/splunk/etc/apps/search/bin/quarantine_files.py" Quarantine files framework - Setting enable_jQuery2 - Unexpected error during execution: Expecting value: line 1 column 1 (char 0)
Thanks for the help @gcusello, but my problem still occurs. When I telnet to port 9997 on my computer (I tried both the private and the public IP), telnet returns a "connection timed out" error. I already enabled receiving. I don't know if I enabled the forwarder or not, but I started it with the command and configured the output and input files.

This is inputs.conf:

[monitor:///home/cowrie/cowrie/var/log/cowrie/cowrie.json]
index = cowrie
sourcetype = json
disabled = false

This is output.conf:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
disabled = false
server = my-private-ip:9997

Sorry if I missed something; as I said, I'm new to both Linux and Splunk.
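On the forwarder, one way to confirm the output configuration was actually picked up (note that Splunk only reads the file if it is named outputs.conf, with an s) is to run, from $SPLUNK_HOME/bin:

./splunk list forward-server
./splunk btool outputs list tcpout --debug

If the forward-server shows up as configured but inactive, the connection to my-private-ip:9997 is still being blocked or refused.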
version 3.31