All Posts


Hi @imam29  Just to check, have you restarted Splunk since making this change? How long did you wait to check for a timeout? What is your tools.sessions.timeout set to?
It's also worth considering the following from the docs:
The countdown for the splunkweb/splunkd session timeout does not begin until the browser session reaches its timeout value. So, to determine how long the user has before timeout, add the value of ui_inactivity_timeout to the smaller of the timeout values for splunkweb and splunkd. For example, assume the following:
splunkweb timeout: 15m
splunkd timeout: 20m
browser (ui_inactivity_timeout) timeout: 10m
The user session stays active for 25 minutes (15m + 10m). After 25 minutes of no activity, the session ends, and the instance prompts the user to log in again the next time they send a network request to the instance.
Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
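For reference, a minimal sketch of where those three timeouts live, reusing the illustrative 15m/20m/10m values from the docs excerpt above (the stanza and setting names are the standard ones; the values are examples only, not recommendations):

# web.conf
[settings]
# splunkweb session timeout, in minutes
tools.sessions.timeout = 15
# browser inactivity timeout, in minutes
ui_inactivity_timeout = 10

# server.conf
[general]
# splunkd session timeout
sessionTimeout = 20m

With those values the effective idle time before a forced re-login is min(15m, 20m) + 10m = 25m, matching the worked example above.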
Hi, I'm having an issue parsing the SQL_TEXT field from oracle:audit:unified. When the field comes through it contains spurious text that isn't returned by the same query run through DBConnect with the oracle:audit:unified template. For example:
DBConnect: grant create tablespace to test_splunk
Splunk: grant create tablespace to test_splunk,4,,1,,,,,,
The raw event seems to come through as a CSV by virtue of the Oracle TA, but we have a regex for the extraction that looks like the below, which seems to work in regex101:
SQL_TEXT="(?<SQL_TEXT>(?:.|\n)*?)(?=(?:",\s\S+=|"$))
I know the data type is CLOB, so I have tried converting it using the substring command, but I get the same result. Any idea what is going on here?
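In case it helps anyone reproduce this, a sketch of how that regex would usually be wired up as a search-time extraction in props.conf, assuming it is applied to the oracle:audit:unified sourcetype (the EXTRACT- name is arbitrary; the regex is the one quoted above):

# props.conf
[oracle:audit:unified]
EXTRACT-sql_text = SQL_TEXT="(?<SQL_TEXT>(?:.|\n)*?)(?=(?:",\s\S+=|"$))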
I have configured ui_inactivity_timeout = 15 but it is not working; the session is still active and the user is not asked to log in again.
Hi @imam29  It sounds like you'll need to update the ui_inactivity_timeout setting under web.conf/[settings], which is the timeout for when the user does not interact with the browser, but there is also another setting, tools.sessions.timeout, which is the actual session timeout value. However, I think it would be worth reading this doc about setting session timeouts so that you can satisfy yourself that you are updating the correct setting and not being overly permissive with timeouts: https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/Configureusertimeouts
In addition to managing this in the config files, you can also update it in the UI if preferred:
1. Click Settings in the upper right-hand corner of Splunk Web.
2. Under System, click Server settings.
3. Click General settings.
4. In the Session timeout field, enter a timeout value.
5. Click Save.
Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
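For the 15-minute idle requirement described in the question, a minimal web.conf sketch might look like the following (the values are illustrative only; review the doc linked above before loosening or tightening anything):

# web.conf
[settings]
# prompt for login after 15 minutes of browser inactivity
ui_inactivity_timeout = 15
# splunkweb session timeout, in minutes
tools.sessions.timeout = 60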
Hi Giuseppe, yes, I understand how Splunk stores its data in the indexes. But when I run the scripted input every hour, it creates 24 entries per day for one device entry from the target system. And with the scripted input I'm not just getting one entry back; I could be getting 200 entries back from the target system. So 24 entries a day for each of 200 device entries is a lot, and over a long period of time it takes up a lot of space on the indexer. So I want to find a way to store the data from the scripted input on the indexer without storing too many duplicates of the same device entries. FYI: the script for the scripted input queries the API of an i-doit system (software for IT documentation) and asks it to return all of its stored device entries. Thanks in advance.
How do I set an idle timeout so that when a user has no activity for a certain time, for example 15 minutes, splunkweb asks them to log in again?
Well, to be quite precise, it's not a raw regex. The docs say:
Match expressions must match the entire name, not just a substring. Match expressions are based on a full implementation of Perl-compatible regular expressions (PCRE) with the translation of "...", "*", and ".". Thus, "." matches a period, "*" matches non-directory separators, and "..." matches any number of any characters.
So in the case of wildcards it can get tricky. I'd try:
[source::/var/log/apple/...]
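Putting that together with the discard_apple_logs transform mentioned in this thread, the combined configuration might look like the sketch below. Only the transform name comes from the thread; its body here is an assumption, using the usual nullQueue routing pattern:

# props.conf
[source::/var/log/apple/...]
TRANSFORMS-null = discard_apple_logs

# transforms.conf (assumed definition, routing matching events to the null queue)
[discard_apple_logs]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue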
Hi @MrLR_02 , what's the issue with having the same data stored in the index with different timestamps? Forget the database approach: Splunk isn't a database, and an index isn't a database table where you store only the data you're using. You can read only the last hour of data from your index to get the latest situation, which is the same as your approach of deleting events. In addition, if you need to, you can see the situation at a given time by changing the time picker of your search. Ciao. Giuseppe
Hi @MrLR_02  As @gcusello said, you'd normally put daily data like this in an index; however, if you really want to write to the KV Store then please see below.
Are you currently using the smi.EventWriter to send data to your index with a stream_events method? If you have the session_key within your writer function then you should be able to use the built-in Splunk Python SDK to communicate with the KV Store. You'll need to initiate a new client using the session_key if you haven't already got one within your method. I haven't got an example to hand, but this pseudo code may help you towards working code:

import sys

import splunklib.client as client
import splunklib.modularinput as smi

# Define your modular input class
class MyModularInput(smi.Script):
    def get_scheme(self):
        scheme = smi.Scheme("My Modular Input")
        scheme.description = "Streams data to a Splunk KV Store"
        scheme.use_external_validation = False
        scheme.streaming_mode = smi.Scheme.streaming_mode_simple
        return scheme

    def stream_events(self, inputs, ew):
        # Iterate over each input stanza
        for input_name, input_item in inputs.inputs.items():
            # Retrieve the session key
            session_key = inputs.metadata["session_key"]
            # Connect to Splunk using the session key
            service = client.connect(token=session_key)
            # Define the KV Store collection name
            collection_name = "your_kv_collection"
            # Data to be written to the KV Store
            data = {
                "key1": "value1",
                "key2": "value2"
            }
            # Access the KV Store collection
            collection = service.kvstore[collection_name]
            # Insert data into the KV Store
            try:
                collection.data.insert(data)
                ew.log("INFO", "Data successfully written to the KV Store.")
            except Exception as e:
                ew.log("ERROR", f"Failed to write data to the KV Store: {e}")

# Run the modular input script
if __name__ == "__main__":
    sys.exit(MyModularInput().run(sys.argv))

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
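One extra note alongside the pseudo code: the collection itself normally needs to be defined before the SDK can write to it. A minimal sketch, assuming the same hypothetical collection name your_kv_collection and fields key1/key2 used in the snippet above:

# collections.conf (in the app that owns the modular input)
[your_kv_collection]
field.key1 = string
field.key2 = string

# transforms.conf (optional, so the collection can also be queried with inputlookup)
[your_kv_collection_lookup]
external_type = kvstore
collection = your_kv_collection
fields_list = _key, key1, key2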
Thanks for the answer. Currently I also store the output in an index, but since the scripted input is executed every hour and I don't want the same data stored in the index several times, I empty the index completely every hour. But now I have the problem that the API query can sometimes fail, which would mean that no data from the system queried via the API is available in Splunk. What solution can you recommend for this problem? Thanks in advance.
Hi @Praz_123 , you can use the correct searches from @livehybrid or a simpler one:
index=_internal host IN (indexer1,indexer2)
| stats count BY host
| append [ | makeresults | eval host="indexer1", count=0 | fields host count ]
| append [ | makeresults | eval host="indexer2", count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0
Alternatively, you can replace the append commands with a lookup containing the list of servers to monitor:
index=_internal host IN (indexer1,indexer2)
| stats count BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0
Ciao. Giuseppe
You can also use an mstats query to query the _metrics index:
| mstats latest(_value) as val WHERE index=_metrics AND metric_name=spl.intr.disk_objects.Partitions.data.* by data.mount_point, metric_name
| rename data.mount_point as mount_point
| eval metric_name=replace(metric_name,"spl.intr.disk_objects.Partitions.data.","")
| eval {metric_name}=val
| stats latest(*) as * by mount_point
| eval free = if(isnotnull(available), available, free)
| eval usage = round((capacity - free) / 1024, 2)
| eval capacity = round(capacity / 1024, 2)
| eval compare_usage = usage." / ".capacity
| eval pct_usage = round(usage / capacity * 100, 2)
| stats first(compare_usage) AS compare_usage first(pct_usage) as pct_usage by mount_point
| rename mount_point as "Mount Point", compare_usage as "Disk Usage (GB)", pct_usage as "Disk Usage (%)"
Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Hi @SN1  Yes, the data is also in the _introspection index, so you can run the following search instead of using REST endpoints if you prefer - this also means you can track it easily over time if needed too.
index=_introspection host=macdev sourcetype=splunk_disk_objects
| rename data.* as *
| eval free = if(isnotnull(available), available, free)
| eval usage = round((capacity - free) / 1024, 2)
| eval capacity = round(capacity / 1024, 2)
| eval compare_usage = usage." / ".capacity
| eval pct_usage = round(usage / capacity * 100, 2)
| stats first(fs_type) as fs_type first(compare_usage) AS compare_usage first(pct_usage) as pct_usage by mount_point
| rename mount_point as "Mount Point", fs_type as "File System Type", compare_usage as "Disk Usage (GB)", pct_usage as "Disk Usage (%)"
Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
I have already checked, and the transform configuration is correct with no conflicts in other Splunk settings. Currently, to filter out sources properly, I have to explicitly define each depth of subdirectories using patterns like:
[source::/var/log/apple/*]
TRANSFORMS-null=discard_apple_logs
[source::/var/log/apple/*/*]
TRANSFORMS-null=discard_apple_logs
This ensures that logs from different levels of subdirectories are included in the filtering process. It's quite strange that Splunk can't handle this scenario, if that's the case. Use cases like mine should be fairly common, so I would expect a more straightforward way to handle this.
Hi @MrLR_02 , I use the KV Store only if I have to manage records (e.g. case management); for other situations I prefer using indexes. Anyway, you can store data in a KV Store by running a scheduled search with the outputlookup command at the end. Ciao. Giuseppe
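As a rough illustration of that approach, the scheduled search could be defined in savedsearches.conf along these lines. The index, sourcetype, field names and lookup name below are hypothetical placeholders, not values from this thread:

# savedsearches.conf
[populate_device_kvstore]
enableSched = 1
# run hourly, shortly after the scripted input has fired
cron_schedule = 5 * * * *
dispatch.earliest_time = -1h
dispatch.latest_time = now
search = index=my_device_index sourcetype=my_scripted_input \
| dedup device_id \
| table device_id, name, status \
| outputlookup my_device_collection_lookup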
Ah @bowesmana , I may have misunderstood the ask here, as you say. I used streamstats by test_name after sorting by (an assumed sequential) test_id. Although I'm not sure why the same test_id appearing for multiple test_name values would affect the output here, as I'm not using the test_id in the streamstats? I may have missed something though (and not had coffee yet!) @dolj Please let us know how you are getting on; if you clarify the requirement I'd be happy to help further and update the previously posted search if required. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Is there a possibility to do this using an index?
Hello, I have written a Python script that performs an API query against a system. This script is to be executed as a scripted input at regular intervals (hourly). Is there a way to store the output of the script in a Splunk KV Store? So far I have only managed to save the output of the scripted input in an index. However, since this is data from a database that is updated daily, I think it would make sense to use the Splunk KV Store. Thanks in advance.
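For context on the scheduling side, an hourly scripted input is typically declared in inputs.conf along these lines; the script path, index and sourcetype here are hypothetical placeholders:

# inputs.conf
[script://$SPLUNK_HOME/etc/apps/my_app/bin/query_api.py]
# run every hour (interval in seconds)
interval = 3600
index = my_device_index
sourcetype = my_scripted_input
disabled = 0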
Hi @SN1  This is because the values from the endpoint are in MB but are being divided by 1024 twice in this search, hence they end up in TB. Try switching 1024/1024 for just 1024 in each occurrence and see if that resolves it for you. Will