Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

To integrate with SQL data you need the Splunk DB Connect app, as it is designed for exactly this purpose. You then configure it to communicate with the SQL Server instance; this requires various services and other components, and yes, there are lots of small steps, but work through them slowly. The Change Data Capture table sounds like any other table, so once DB Connect is configured you should be able to query it within the app and send that data to Splunk. #Start here - Follow these steps carefully. This is really good documentation; make sure you adapt the configuration to your own SQL Server environment. https://lantern.splunk.com/Splunk_Platform/Product_Tips/Extending_the_Platform/Configuring_Splunk_DB_Connect  #Install DB Connect - This is typically installed on a Heavy Forwarder (a full Splunk instance). For small environments you can install it on a Search Head or an all-in-one instance, but you may see performance issues if that instance is also running lots of searches, other Splunk apps, and other functions. The DB Connect app cannot be installed on a Universal Forwarder. https://splunkbase.splunk.com/app/2686  #Docs https://docs.splunk.com/Documentation/DBX/3.17.1/DeployDBX/AboutSplunkDBConnect
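Once a connection is configured, a quick way to sanity-check that the CDC table is reachable is the dbxquery command that DB Connect adds to SPL. A minimal sketch, assuming a connection named mssql_cdc and a CDC change table called cdc.dbo_Orders_CT (both names are placeholders for whatever you actually configure):
| dbxquery connection="mssql_cdc" query="SELECT TOP 10 * FROM cdc.dbo_Orders_CT"
| table *
If that returns rows, you can then set up a scheduled DB Connect input against the same table (for example with a rising column on a timestamp field) to stream the changes into an index.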
Hi Team, We use a MongoDB Python script to get the logs into Splunk. We could see historical logs getting ingested, so we changed the checkpoint value to 2024-05-03 08:46:13.327000. After that it worked fine for some time, but then we again saw historical data getting ingested at 2 am. How do we fix this? What should the checkpoint file value be?
Current setup: Indexers --> F5 VIP --> CM. The CM sees the requests as coming from the F5 VIP rather than from the actual source IP of the indexers. The first indexer connects successfully through the F5 VIP, but subsequent requests from the other indexers are dropped because the CM sees the connections as coming from the same indexer (the same F5 VIP). How do you configure the load balancer so that the CM sees the actual source IP rather than the F5 VIP? Note: the CM and indexers sit on the same network. I do not have much knowledge on the LB side, so any assistance is much appreciated.
After pulling cases from ES into Phantom, a certain label is assigned to the event, which is later automatically promoted to a case. I have created a playbook that assigns labels to the promoted cases (based on the triggered Splunk rule) and it works 99% of the time, but sometimes I get two identical cases with different labels (the newly assigned one and the one configured in the Splunk app). Has anyone encountered this issue before?
We moved from Splunk Enterprise to Splunk Cloud a few years ago. To migrate all our objects we packaged all apps with the CLI package command and uploaded them to Splunk Cloud. This command merges everything from the local folder into the default folder, as stated here: Package apps | Documentation | Splunk Developer Program. Unfortunately the consequence is that these objects are no longer editable via the UI. A number of changes simply don't apply, even though the UI doesn't give me an error (e.g. re-assigning an orphaned search, or deleting an old object). To work around this issue, we asked Splunk Support for an export of the app (there is no way of doing this via the API as far as I can find) so we could change the app. But if we change the app and repackage it, all local objects will again be moved to the default folder, making our problem even worse in the future. I have always used the "package" CLI command, which does this local-to-default merge. Does the Packaging Toolkit work in the same way? I don't have experience with it. If it is able to keep objects in the local folder, it might save us... Any other ideas to overcome this situation are welcome as well... Thanks!
Hi @imarri , I have encountered this error before and I solved it by refreshing the credentials, i.e. the API token. Try entering a new token and see if it works.
Thanks @manjunathmeti. For some reason I thought that would not work, but it does!
Hi, hopefully this is the right place to ask. I am pretty new to MS SQL as well as Splunk, so I'm curious what the simplest way is to pipe MS SQL data (the Change Data Capture data/table in particular) into Splunk, and I'm wondering if anyone here has done/tried it. I currently have the Universal Forwarder set up on my Windows machine and am able to pipe Event Viewer data into Splunk. I looked into Splunk DB Connect, but the setup process seems a little too complicated for me (I installed Java, but am not sure how to go from there). I am also unsure whether I can achieve what I want through the Universal Forwarder, as my MS SQL uses Windows Authentication, and from what I've read Windows Authentication is not supported by the Universal Forwarder (do correct me if I am wrong). Appreciate any help.
Check splunkd.log and splunkd_ui_access.log for any errors/warnings.
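If the instance forwards its internal logs, you can also check them from the search bar. A minimal sketch, where the host filter is a placeholder for the instance you are troubleshooting:
index=_internal source=*splunkd.log (log_level=ERROR OR log_level=WARN) host=your_host_here
| stats count by host, component, log_level
| sort - count
This gives a quick count of warnings and errors per component, which usually points at where to look next in the raw log files.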
Hi @abhishekpatel2 , try to create your search using Pivot, starting from your DataModel. Then check the generated search in the Job Inspector; maybe there's an error in the field names. Ciao. Giuseppe
You can refresh the Splunk configurations without restarting by opening this URL in your browser: https://SPLUNK_DEPLOYMENT_SERVER:8000/debug/refresh
I tried that too, but I am getting no results with it.
Hi @jbv , you can use the now() function; for more info see https://docs.splunk.com/Documentation/SCS/current/SearchReference/DateandTimeFunctions You can try something like this:
| makeresults
| eval current_time=now()
| table current_time
It's in epoch time, so you can then convert it to whatever format you like. Ciao. Giuseppe
Hello @jbv, You can get the current time using the now() function. By default it is returned in epoch format. You can use eval to assign it to a field and you'll get what you want.
| eval current_time_epoch=now()
Thanks, Tejas. --- If the above solution helps, an upvote is appreciated!
Hi, Try this:
| inputlookup yourlookup
| search trigger_email=true
| eval email_subject = "<field_MotherYear> - <field_Customer> - <field_Device> - <field_CheckName> - <field_SelfHealCount> - <field_Status> - <field_Timestamp>"
The first line reads the data from the lookup file, the second keeps only the records with the email trigger enabled, and the eval constructs the subject from your fields (replace the <field_...> placeholders with your actual field names, concatenated with the . operator). You can then use the dynamically generated field as the subject of the alert email, for example via the $result.email_subject$ token.
There are a couple of functions that return time: now(), which returns the time the search started, and time(), which returns the time the function is executed. Both of these functions already return the time in epoch format.
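A quick way to see the difference is a minimal sketch you can paste into the search bar (the field names here are just illustrative):
| makeresults count=3
| eval search_start=now(), eval_time=time()
| table search_start eval_time
now() is identical across all rows because it is fixed at search start, while time() is evaluated per event and can differ (only fractionally in this tiny example).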
It is not clear what your actual requirement is. Which average do you want to compare to: the average VALUE for that time period (15m) across all CPUs, or the average for that CPU across the whole time period? Assuming the former, a "standard" way of finding a "fiddle factor" is to determine the standard deviation of the VALUEs in the 15m time period, and then determine, for each CPU, how many standard deviations its VALUE is above the mean. You might do this like this:
| eventstats mean(VALUE) as MeanV stdev(VALUE) as StDevV by _time
| eval exceedFactor=if(VALUE > MeanV,(VALUE - MeanV)/StDevV, 0)
| timechart values(exceedFactor) span=15m by cpu limit=0
No, that's not right. Putting the cpu into the by clause of the stats command doesn't give the mean value for the cluster; it performs the stats on the individual CPUs.
Hi, Is there a way to get the current time in Splunk and then convert it to epoch? I'm trying to create a dashboard to show inactivity from my data sources and plan to use info from the | metadata command.
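For reference, a minimal sketch of the kind of inactivity check described above, assuming the standard fields returned by | metadata (recentTime is the most recent event time, already in epoch):
| metadata type=sourcetypes index=*
| eval minutes_since_last_event = round((now() - recentTime) / 60, 0)
| convert ctime(recentTime) as last_event_time
| table sourcetype last_event_time minutes_since_last_event
| sort - minutes_since_last_event
Since both now() and recentTime are epoch values, the subtraction gives the idle time directly; the convert is only there to make the last event time human-readable.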
Hi @sarlacc , first of all, it isn't a good idea to have the Deployment Server on Indexers or Search Heads; do you have another server? You can use a server shared with other roles only if the DS has to manage up to 50 clients; more than that requires a dedicated server. About the data, first check the TIME_FORMAT: what's the format of your date, European (dd/mm/yyyy) or American (mm/dd/yyyy)? By default Splunk uses the American format. About the one Universal Forwarder with issues: do you have its internal Splunk logs (_* indexes)? If yes, it's an issue with that data source; if not, there's a connection issue. Ciao. Giuseppe
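A minimal sketch of that internal-log check, assuming you replace my_forwarder with the real host name of the Universal Forwarder:
index=_internal host=my_forwarder source=*metrics.log*
| stats latest(_time) as last_seen by host
| convert ctime(last_seen)
If this returns a recent timestamp, the forwarder is connected and the problem lies with the data source; if it returns nothing, look at the connection between the UF and the indexers.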