All Posts

In the GUI, go to Data > Data Availability and click the green Baseline Search button; that will generate the lookup. You can then go back to Data Availability and it should display results.
I believe you don't have to escape the double quotes. Check the examples in the docs: https://docs.splunk.com/Documentation/Splunk/9.2.1/admin/macrosconf
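For illustration, here is a minimal sketch of what that macros.conf stanza might look like without the backslash escapes; this is just the stanza from the question below with the escapes removed, not a verified fix:

[strftime_utc(2)]
args = field, format
# same logic as in the question, with the double quotes left unescaped
definition = strftime($field$ - (strptime(strftime($field$, "%Y-%m-%dT%H:%M:%SZ"), "%Y-%m-%dT%H:%M:%S%Z") - strptime(strftime($field$, "%Y-%m-%dT%H:%M:%S"), "%Y-%m-%dT%H:%M:%S")), "$format$")

It would still be invoked from the search as `strftime_utc(_time, "%Y-%m-%dT%H:%M:%SZ")`, as in the original question.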
Could this contribute to the slow performance when searching for Knowledge Objects in our deployment? We have over 2000 user directories in $SPLUNK_HOME/etc/users on our SHs, representing every user who has ever existed since we started with Splunk. When we go to Settings -> Searches, Reports, and Alerts, it can take over a minute to find a search.
Hi, do you have a distributed architecture or just a single instance? Did you set the volume settings in indexes.conf?

[volume:primary]
path = /path/to/storage/partition
maxVolumeDataSizeMB = 5000000
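For reference, an index stanza would then point its bucket paths at that volume. This is only a sketch with an assumed index name, not the poster's actual configuration:

[my_index]
# homePath and coldPath can reference the volume defined above
homePath = volume:primary/my_index/db
coldPath = volume:primary/my_index/colddb
# thawedPath cannot reference a volume
thawedPath = $SPLUNK_DB/my_index/thaweddb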
Splunk has to pull data from somewhere, either logs or an API. If you missed the scan, the data should still reside somewhere in the Nessus system. You would need to look at the inputs configuration and see whether there is an option to collect historical data from the Nessus system. I would start by referring to the documentation and seeing if that can help you: https://docs.tenable.com/integrations/Splunk/Content/Splunk2/CreateInput.htm
Hello team, I am trying to create a macro and then use it in my Splunk dashboard. The purpose is to get the time of the entered input in the dashboard (in UTC only), irrespective of the user's time zone setting in Splunk. My macro is:

[strftime_utc(2)]
args = field, format
definition = strftime($field$ - (strptime(strftime($field$, \"%Y-%m-%dT%H:%M:%SZ\"), \"%Y-%m-%dT%H:%M:%S%Z\")-strptime(strftime($field$, \"%Y-%m-%dT%H:%M:%S\"), \"%Y-%m-%dT%H:%M:%S\")), \"$format$\")

My search looks like:

*My query* | eval utc_time=`strftime_utc(_time, "%Y-%m-%dT%H:%M:%SZ")`

so that I always get the output in UTC only. But I am getting the error below:

Error in 'eval' command: The expression is malformed. An unexpected character is reached at '\"%Y-%m-%dT%H:%M:%SZ\"), \"%Y-%m-%dT%H:%M:%SZ\") - strptime(strftime(_time, \"%Y-%m-%dT%H:%M:%S\"), \"%Y-%m-%dT%H:%M:%S\")), \"%Y-%m-%dT%H:%M:%SZ\"))'.

How can I resolve this? Any help is appreciated. Thanks
I suspect that they have made some changes to the TA add-on code and Python scripts (universal_session.py). I would contact them directly and see if you can get any further information. Disabling verification comes with security risks, and it is most likely done within the Python code. But I understand you have self-signed certificates and should have options, so seeking their advice might be the best course of action; hopefully they can get the TA developer to give you further help: support@nozominetworks.com
Hi, can you paste your confs here? Usually the proper way of doing it would look something like this:

transforms.conf

[filter_some_events]
REGEX = <regex_that_matches_the_events_you_want>
DEST_KEY = _MetaData:Index
FORMAT = <your_index>

props.conf

[<sourcetype_stanza>]
...other_props_configs...
TRANSFORMS-filter_name = filter_some_events
When navigating to "ESS" -> "Data" -> "Data Availability", I get the following error:

>>> Error in 'lookup' command: Could not construct lookup 'SSE-data_availability_latency_status.csv, productId'. See search.log for more details. <<<

I can find the definition of SSE-data_availability_latency_status in "Lookups" -> "Lookup definitions". However, it looks like SSE-data_availability_latency_status.csv doesn't exist:

>>> | inputlookup SSE-data_availability_latency_status.csv --> The lookup table 'SSE-data_availability_latency_status.csv' requires a .csv or KV store lookup definition. <<<

I'm using Splunk Cloud 9.1.2312.102 and ESS 3.8.0. Thanks for your reply in advance!
Hi @dmitch, Thank you for answering. I had already tested that in Staging and it works. However, we need the integration with Splunk Cloud Platform in PROD, so we cannot skip TLS verification, as that could be a security risk. Is it possible to fix this issue on the Splunk side, i.e. sign the Trial endpoint "prd-p-e7xnh.splunkcloud.com:8088" with the same certificate as the Paid endpoint "prd-p-e7xnh.splunkcloud.com:443"? We would really appreciate this fix from Splunk. The other observability backends that we have tested have a public CA certificate on the target endpoint for the Trial account. Thank you in advance. Antonio
I'd like to monitor log files and ingest only specific lines from these files. My props.conf and transforms.conf have no errors, but for some reason the props.conf is not taking effect: instead of indexing specific lines, Splunk is indexing the whole log. Is there a specific path where the .conf files need to be placed, or any other solution?
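As a general pointer (not specific to this environment), index-time filtering via props.conf and transforms.conf has to live on the first full Splunk instance that parses the data, i.e. an indexer or heavy forwarder rather than a universal forwarder, typically inside an app's local directory. A hypothetical layout, with the app name being an assumption:

$SPLUNK_HOME/etc/apps/my_filtering_app/local/props.conf
$SPLUNK_HOME/etc/apps/my_filtering_app/local/transforms.conf

A restart of that instance is needed for index-time changes to take effect.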
We had a Nessus scan, but the Nessus configuration was not completed in the Tenable add-on on the Splunk side. Hence we missed the scan and the data was not onboarded to Splunk. Now we want to get that data back; how should we do that?
Hello everyone,

We are currently running Splunk Enterprise version 9.0.6 on a Windows Server 2016 machine as part of a distributed Splunk environment. Due to compliance requirements, we need to upgrade to at least version 9.1.4. However, Splunk Enterprise 9.1.4 officially lists Windows Server 2019 as a prerequisite.

I have tested the upgrade in our lab environment on Windows Server 2016, and it appears to work without any immediate issues. Despite this, I am concerned about potential unforeseen impacts or compatibility problems since the official documentation recommends Windows Server 2019.

Additionally, our OS team has advised that upgrading the OS from Windows Server 2016 to 2019 could potentially corrupt the servers, necessitating a rebuild. My boss is understandably reluctant to take this risk, especially since the current server is planned for retirement by the end of this year.

Has anyone else performed a similar upgrade on Windows Server 2016 within a distributed Splunk environment? Are there any known issues or potential risks we should be aware of? Any insights or experiences would be greatly appreciated.
Hello @kackerman7, I'm sharing the POC details I worked on a few years ago. If you are using a client-server architecture for your external React application, I suggest checking my architecture, which integrates easily with your existing setup. Just go through it, try it in your local lab, and let me know if you need more help. I hope this will help you. Thanks, KV. An upvote would be appreciated if any of my replies help you solve the problem or gain knowledge.
You could use a KVStore with fields "received_date", "file_date", and "company_id". See https://docs.splunk.com/Documentation/SplunkCloud/latest/Knowledge/ConfigureKVstorelookups

Once your KVStore lookup is defined, you could use it like this:

index=wealth
| search transform-file
| search ace_message
| rex field=_raw "inputFileName: (?<inputFileName>.*?),"
| rex field=_raw "outputFileName: (?<outputFileName>.*?),"
| rex field=inputFileName "file\_\d+\_(?<CompanyId>\d+)\_"
| rex field=inputFileName "file\_(?<Date>\d+)\_"
| table inputFileName, outputFileName, CompanyId, Date
| lookup received_files_lookup file_date as Date, company_id as CompanyId
| where received_date>(now()-(60*60*24*30))

Your alert can trigger if this search returns any rows of data. You will also need a corresponding mechanism to store any new files in the KVStore:

index=wealth
| search transform-file
| search ace_message
| rex field=_raw "inputFileName: (?<inputFileName>.*?),"
| rex field=inputFileName "file\_\d+\_(?<company_id>\d+)\_"
| rex field=inputFileName "file\_(?<file_date>\d+)\_"
| table company_id, file_date
| eval received_date=now()
| outputlookup received_files_lookup append=true
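If it helps, the backing KV Store collection and lookup definition would be declared roughly like this; the collection name and field types are assumptions, only the lookup name and field names come from the searches above:

collections.conf

[received_files]
field.company_id = string
field.file_date = string
field.received_date = number

transforms.conf

[received_files_lookup]
external_type = kvstore
collection = received_files
fields_list = _key, company_id, file_date, received_date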
It's okay to repeat yourself. Your comment suggests you may not understand, and that's okay too. To give you a hint, RPMs are - as we know - signed manifests and content, which includes overlay files and scripts. The format allows for very detailed specification of what's required and all its dependencies. Yum will take those requirement specs and, since we know it's identical, repeatedly install exactly what we require, over and over again. It's consistent, and in a verifiable way.

Not there yet? OK. You should know: this idea that YUM === "blindly installing anything in prod without assessment and no other workflow is possible" is verrrrrry nai--uh, simplistic. It's possible, sure; same as without it. Every tool can be used poorly. But using it properly really opens up some adequate features. And we'd like Splunk to be adequate. Here's the water, if it wants to drink.

I *do* install a lot of things automatically. When working on the largest single-owner intranet in the world, careful automation helps. When I promote a version of software, I know it's going to get installed on all my hosts exactly as I want by specifying a nevra. This has been possible-- no, scratch that. This has been reliably consistent in a verifiable way with an excellent (simulated) rollback mechanism for 25+ years. People born AFTER this was a proven feature have learned to crawl, walk, run, add, multiply, converse, demonstrate, compete, learn, love, graduate and excel in a field; all in that time. People born after this feature was a feature could have learned this feature while looking after their own newborn children.

EVERY competitor to Splunk figured it out in that time. Splunk has a willing army of volunteers who'd love to show them, I'm sure, but who also remain a valuable resource completely untapped. I hope Splunkisco can learn more about it and catch up to 1999. But look at the time: it's almost 5 months to the 13th birthday. See ya there!
I issued the following command on the client server, targeting the VIP:

$ ssh splunk@10.1.2.10
#######################################
###              pspfwd01           ###
#######################################
******************************************************************
WARNING: To protect the system from unauthorized use and to ensure that the system is functioning properly, activities on this system are monitored and recorded and subject to audit. Use of this system is expressed consent to such monitoring and recording. Any unauthorized access or use of this Automated Information System is prohibited and could be subject to criminal and civil penalties.
******************************************************************
splunk@10.1.2.10's password:

Heavy forwarder 1 [pspfwd01 -- 10.1.2.11] replied...
There are two heavy forwarders at our site. The current setup is that there is a VIP defined for client server access. Here is an example of the IP definitions:

heavy forwarder 1 [10.1.2.11]
heavy forwarder 2 [10.1.2.12]
VIP [10.1.2.10]

When a client server wants to forward monitoring data to Splunk, it just points to the VIP, 10.1.2.10. However, I could not find the IP [10.1.2.10] on the client server or on either heavy forwarder by issuing the ifconfig OS command. How was the VIP defined? There is no load balancer in front of the heavy forwarders.
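For context, a client that forwards to the VIP would normally carry an outputs.conf along these lines; this is only a sketch, and the group name and receiving port are assumptions:

outputs.conf

[tcpout]
defaultGroup = heavy_forwarders

[tcpout:heavy_forwarders]
server = 10.1.2.10:9997

Note that this stanza only tells clients where to send data; it does not define the VIP itself, which would be configured outside of Splunk.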
setting "allowRemoteLogin" in server.conf did allow default password and then I changed the password using above ./splunk edit user ... Thanks.
Growing a bit exasperated with the issue that I'm facing while integrating Splunk with the Duo Admin API; I'm seeing the following error right from the get-go during the initial configuration:

EOF occurred in violation of protocol (_ssl.c:1106)

I have not seen it before, and it's even stranger as there are no connectivity issues: a curl to my API host shows connectivity is fine, the TLS handshake is successful, and a TCP dump shows that it was able to reach Duo cloud's IP. (Screenshot of the error preventing me from proceeding was attached.) The error is happening at initial setup, and it's hard to determine why with no information or logs to go off of... is anyone familiar with this?
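For reference, the out-of-band checks described above (reachability plus TLS handshake) might look something like this; the hostname is a placeholder, not the poster's actual Duo API host:

curl -v https://api-XXXXXXXX.duosecurity.com/
openssl s_client -connect api-XXXXXXXX.duosecurity.com:443 -servername api-XXXXXXXX.duosecurity.com </dev/null

Running them from the Splunk server itself (rather than a workstation) confirms the network path the add-on would actually use.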