All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I would like to add a line break in the pie chart label so that it shows the full title as well as the value and the percentage. (Screenshots of the current and desired results were attached to the original post.)

End of the SPL used:

.... | eventstats sum(tache) as total_tache
| eval percent = round((tache/total_tache)*100,2)
| eval DR=DR." (".'tache'.")".",".'percent'."%"
| rex mode=sed field=DR "s/,/\n/g"

I tried to use the sed command; it works in a table but not in a pie chart. Can you help me?

Hi,

As part of my search, I'm building some strings with eval and assigning them to fields. I want to use these built strings as the text displayed for the timechart fields. It would be something like this:

index=ecevt2 source=128_distribution
| eval bucket1_start=round(min_balance,0)
| eval bucket1_end=round(min_balance+range)
| eval bucket1=tostring(bucket1_start). "-" .tostring(bucket1_end)
| eval bucket2_start=round(bucket1_end,0)
| eval bucket2_end=round(bucket1_end+range)
| eval bucket2=tostring(bucket2_start). "-" .tostring(bucket2_end)
| eval bucket3_start=round(bucket2_end,0)
| eval bucket3_end=round(bucket2_end+range)
| eval bucket3=tostring(bucket3_start). "-" .tostring(bucket3_end)
| eval bucket4_start=round(bucket3_end,0)
| eval bucket4_end=round(bucket3_end+range)
| eval bucket4=tostring(bucket4_start). "-" .tostring(bucket4_end)
| eval bucket5_start=round(bucket4_end,0)
| eval bucket5_end=round(bucket4_end+range)
| eval bucket5=tostring(bucket5_start). "-" .tostring(bucket5_end)
| fields bucket1
| timechart span=3m max(value1) as bucket1, max(value2) as bucket2, max(value3) as bucket3, max(value4) as bucket4, max(value5) as bucket5

So, instead of showing "bucket1" as the field name for value1 in the timechart, I would like to show the string constructed with eval (tostring(bucket1_start). "-" .tostring(bucket1_end)). Is there any way to achieve that?

Many thanks

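A sketch of one common workaround (an assumption, not tested against this data): instead of renaming fields after the fact, use eval's {field} syntax to create a field whose name is the constructed string, then let timechart pick it up with a wildcard:

```
index=ecevt2 source=128_distribution
| eval bucket1_label=tostring(round(min_balance,0))."-".tostring(round(min_balance+range,0))
| eval {bucket1_label}=value1
| timechart span=3m max(*)
```

Note that max(*) will also pick up any other numeric fields in the events, so a fields command before the timechart may be needed to restrict the set.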
Hi.

We are about to ingest logs from multiple suppliers, where each supplier has full control over their own infrastructure. My approach was to create a couple of heavy forwarders and dedicate a port to each supplier:

supplier_1 sends data to port 9991
supplier_2 sends data to port 9992
...

This part I think I have working. The next problem is that I need to separate the data of supplier_1 from supplier_2, so my thought was to create an index per supplier. The problem is then: how do I route data received on port 9991 to index_1 regardless of what is configured on the Universal Forwarder (except for Splunk internals such as _internal)? The different suppliers might use the same source or sourcetype, so the receiving port on the heavy forwarder is the only thing I can use to separate the data.

Any help is much appreciated. Kind regards

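For raw TCP inputs, per-port index assignment can be sketched directly in inputs.conf on the heavy forwarder; the index names below are assumptions. Note that for cooked traffic from Universal Forwarders (splunktcp), index settings sent by the forwarder can take precedence, so this needs verifying against the actual setup:

```
# inputs.conf on the heavy forwarder (sketch)
[tcp://9991]
index = supplier_1

[tcp://9992]
index = supplier_2
```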
Hi Splunkers!

How can I find out which pipeline a given metrics.log group captures? I want to measure the heavy forwarder's resource load. I know that

index=_internal source=metrics.log group=*thruput*

captures the indexing pipeline (reference: https://docs.splunk.com/Documentation/Splunk/8.0.2/Troubleshooting/Aboutmetricslog), and I have also seen

index=_internal source=metrics.log group="tcpin_connections"

but I don't know what that one covers. Which metrics.log groups capture the parsing pipeline? Please help me.

P.S. I have already referenced the sites below:
https://www.splunk.com/en_us/blog/tips-and-tricks/some-details-on-metrics-log-data-format-utility.html
https://www.splunk.com/en_us/blog/tips-and-tricks/forwarder-and-indexer-metrics.html
https://wiki.splunk.com/Deploy:Splunk_Metric_Reports
https://www.splunk.com/en_us/blog/conf-splunklive/tracking-indexing-status-in-splunkd-log-and-metrics-log.html

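One way to see per-pipeline load is via the group=queue metrics, which report fill levels for the queues feeding each pipeline stage (parsingqueue, aggqueue, typingqueue, indexqueue). A hedged sketch; the host filter is an assumption:

```
index=_internal source=*metrics.log* host=<your_hf> group=queue
| timechart span=5m avg(current_size_kb) by name
```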
Dears,

I received an AppInspect report indicating that the modular input in the add-on I created has the following issue:

ImportError: No module named enum
File: bin/sls_datainput.py

Actually, the missing enum module is only a Python 2 issue; the add-on fully supports Python 3 (Splunk HF 8.0) and has already been configured by setting python.version to python3 in inputs.conf.

My question is: is Python 2 and 3 compatibility a criterion for passing AppInspect? Any other suggestions?

Thanks!

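If dual-version support turns out to be required, a minimal sketch of a guarded import that also loads under Python 2 without the enum34 backport (the Status class is a made-up example, not from the add-on):

```python
try:
    from enum import Enum  # standard library on Python 3.4+
except ImportError:
    Enum = None  # Python 2 without the 'enum34' backport

if Enum is not None:
    class Status(Enum):
        OK = 1
        FAILED = 2
    STATUS_OK = Status.OK.value
else:
    # plain integer constants as a fallback
    STATUS_OK, STATUS_FAILED = 1, 2
```

Declaring enum34 as a dependency (or vendoring it under bin/) is the other common route.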
Hi,

I have an issue at a customer where ES is not showing notables on the Incident Review page or the Security Posture page. I have confirmed that the custom correlation searches are enabled and that they run successfully and create alerts (visible under "Activity" -> "Alerts"). I have also found that the notable index is empty over the past 30 days.

I would really appreciate some assistance on this topic, as I have looked at all the articles on Answers and cannot seem to find the issue.

Team,

I have some Windows servers from which I am able to get Windows event logs and perfmon data, but I am not able to pull the custom logs. Here is my inputs.conf configuration. I have checked case sensitivity too, since we had one situation where correcting the case did fix the issue, but in this case nothing is working for me.

[monitor:///D:\logfiles\ARBGlobal\SalesCoreApp\SaleCoreApp.log]
sourcetype = salescore
disabled = 0
ignoreOlderThan = 7d

Request your help in fixing the issue.

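One thing worth double-checking (a sketch, not a verified fix): on Windows, monitor stanzas usually take the native path directly after monitor://, i.e. two slashes and then the drive letter, rather than the three-slash *nix form used above:

```
[monitor://D:\logfiles\ARBGlobal\SalesCoreApp\SaleCoreApp.log]
sourcetype = salescore
disabled = 0
ignoreOlderThan = 7d
```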
Hi, where can I get the actual search query behind the ITSI entity import searches?

Hi Splunkers,

I have a concern about the upgrade docs, where Splunk says: "If you use a .tar file, expand it into the same directory with the same ownership as your existing Splunk Enterprise instance. This overwrites and replaces matching files but does not remove unique files."

Does this mean I am safe to go ahead without backing up my data? Note: we have our data indexes at a different location (NOT the default /opt/splunk/var/lib/splunk), except for the internal indexes.

Also, can I upgrade with a tarball on top of an installation done previously via rpm?

Any help is highly appreciated.

Thanks, Pramodh

I've installed the TA on our Heavy Forwarder and configured it with the details needed to connect to the Event Hub, as well as the settings for the proxy it needs to use. Despite this, we're not seeing any traffic on our proxy from the Heavy Forwarder, even though it appears to have tried and failed to connect:

2020-03-30 10:37:53,193 ERROR pid=11042 tid=MainThread file=base_modinput.py:log_error:307 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/modinput_wrapper/base_modinput.py", line 127, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure_event_hub.py", line 92, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/input_module_azure_event_hub.py", line 112, in collect_events
    partition_ids = client.get_partition_ids()
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure/eventhub/client.py", line 163, in get_partition_ids
    return self.get_properties()['partition_ids']
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure/eventhub/client.py", line 146, in get_properties
    response = self._management_request(mgmt_msg, op_type=b'com.microsoft:eventhub')
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure/eventhub/client.py", line 127, in _management_request
    self._handle_exception(exception, retry_count, max_retries)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure/eventhub/client.py", line 105, in _handle_exception
    _handle_exception(exception, retry_count, max_retries, self)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/azure/eventhub/error.py", line 196, in _handle_exception
    raise error
ConnectError: Unable to open management session. Please confirm URI namespace exists.

I've configured the proxy and can see it showing up in ta_ms_aad_settings.conf as:

[proxy]
proxy_enabled = 1
proxy_port = <proxyport>
proxy_url = <proxyurl>

And when searching for sourcetype=ta:ms:aad:log, I can see the message:

2020-03-30 10:37:58,475 DEBUG pid=12915 tid=MainThread file=base_modinput.py:log_debug:286 | _Splunk_ Proxy is enabled: <proxyurl>:<proxyport>

However, when I run tcpdump on the host, I can see it making DNS requests to resolve the host in the Event Hub connection string I provided, and then making a request directly out to that host, without using the proxy:

2020-03-27 15:34:38.750364 IP (tos 0x0, ttl 64, id 26550, offset 0, flags [DF], proto TCP (6), length 60)
  <SplunkHeavyForwarder> > <AzureEventHub>: Flags [S], cksum 0xf2b6 (incorrect -> 0xbf5a), seq 330547664, win 29200, options [mss 1460,sackOK,TS val 3370165862 ecr 0,nop,wscale 7], length 0
2020-03-27 15:34:39.752354 IP (tos 0x0, ttl 64, id 26551, offset 0, flags [DF], proto TCP (6), length 60)
  <SplunkHeavyForwarder> > <AzureEventHub>: Flags [S], cksum 0xf2b6 (incorrect -> 0xbb70), seq 330547664, win 29200, options [mss 1460,sackOK,TS val 3370166864 ecr 0,nop,wscale 7], length 0
2020-03-27 15:34:41.757349 IP (tos 0x0, ttl 64, id 26552, offset 0, flags [DF], proto TCP (6), length 60)
  <SplunkHeavyForwarder> > <AzureEventHub>: Flags [S], cksum 0xf2b6 (incorrect -> 0xb39b), seq 330547664, win 29200, options [mss 1460,sackOK,TS val 3370168869 ecr 0,nop,wscale 7], length 0
2020-03-27 15:34:45.768341 IP (tos 0x0, ttl 64, id 26553, offset 0, flags [DF], proto TCP (6), length 60)
  <SplunkHeavyForwarder> > <AzureEventHub>: Flags [S], cksum 0xf2b6 (incorrect -> 0xa3f0), seq 330547664, win 29200, options [mss 1460,sackOK,TS val 3370172880 ecr 0,nop,wscale 7], length 0

I'm out of ideas on where this is failing. Has anyone had a similar issue, or can you see something I've missed? I don't need to edit ta_ms_aad_settings.conf.spec, do I?

I assume it's like a template for ta_ms_aad_settings.conf which gets populated with the proxy config. Currently the spec file is empty:

[proxy]
proxy_enabled =
proxy_type =
proxy_url =
proxy_port =
proxy_username =
proxy_password =
proxy_rdns =

[logging]
loglevel =

Using a Splunk Enterprise trial license.

Hi, I have forwarded a file to an indexer cluster by installing a Universal Forwarder on the host and deploying config files to the UF via the deployment server, but I could not find the data on the cluster or the search head. Could someone help me find the issue from the details below?

Architecture overview on GCP (Google Cloud Platform):
1 master node
1 search head
3 indexers
2 universal forwarders
1 deployment server

server.conf on the master node (path /opt/splunk/etc/system/local/):

[clustering]
mode = master
replication_factor = 3
search_factor = 2

Deployment server configuration: created 2 apps under /opt/splunk/etc/deployment-apps/: UF_1 and UF_2.

Configuration for app UF_1 (/opt/splunk/etc/deployment-apps/UF_1/default):

inputs.conf
[monitor://home/data/Phone_Numbers.csv]
disabled=false

outputs.conf
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
disabled = false
server = x.x.x.x:9997,y.y.y.y:9997,z.z.z.z:9997

serverclass.conf on the deployment server (path /opt/splunk/etc/system/local/):

[serverClass:class_1:app:UF_1]
restartSplunkWeb = 0
restartSplunkd = 1
stateOnClient = enabled

[serverClass:class_1]
whitelist.0 = universalforwarder

[serverClass:class_2:app:UF_2]
restartSplunkWeb = 0
restartSplunkd = 1
stateOnClient = enabled

[serverClass:class_2]
whitelist.0 = universalforwarder-2

Universal forwarder configuration:

inputs.conf
[monitor://home/data/Phone_Numbers.csv]
disabled=false

outputs.conf
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
disabled = false
server = 103.104.86.126:9997,31.238.202.247:9997,31.202.197.21:9997

Verified on indexer-1; could not find any file with a timestamp created after the data ingestion:

/opt/splunk/var/lib/splunk/defaultdb/db
[root@indexer-1 db]# ls
CreationTime GlobalMetaData

Verified on the universal forwarder using the command ./splunk list inputstatus:

home/data/Phone_Numbers.csv
file position = 849
file size = 849
percent = 100.00
type = finished reading

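One detail worth checking in the monitor stanza above (a sketch based on an assumption, not a confirmed diagnosis): an absolute *nix path takes a third slash after monitor://, and without an explicit index the events land in the default index. Searching index=* over all time, plus index=_internal host=<uf>, can confirm whether anything arrives at all.

```
[monitor:///home/data/Phone_Numbers.csv]
disabled = false
index = main
sourcetype = csv
```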
Hi All,

I managed to store and retrieve data using the following Python commands:

# save checkpoint
helper.save_check_point(key, state)

# delete checkpoint
helper.delete_check_point(key)

# get checkpoint
state = helper.get_check_point(key)

I would like to know where this data is stored and how I can check the values in Splunk. I was looking under lookups using the name and source given for the add-on I am working on, but could not find it.

Regards, Naresh

I'm trying to find a way to programmatically get the average size of data flowing into each index on a daily basis, so we can set the indexes.conf maximum data retention based on how many days we want to hold for each index. There seem to be a variety of ways to get the size of indexes per day, none of which seem to match. There are a lot of posts on this from the last decade, but reading all of them has just left me more confused.

Ideally I can figure out the average amount of MB required on disk (across all indexers in a site) per index. Then I can define how many days I want to retain on a per-index basis and set homePath.maxDataSizeMB to avg_per_day_size_mb * number_of_days_to_retain. Ignore cold/frozen for the moment and assume that all hot/warm is going to the same path (e.g. no SmartStore). How can I do this?

The basic ways I've seen:

1. Use dbinspect. I can filter by state = "hot" or "warm" as well as filter by bucketId to throw away replicated copies of each bucket (due to either the replication factor or the site replication factor). Given that dbinspect shows a snapshot in time of the buckets, I don't think it accurately shows the amount of new data coming into the index per day.

2. Use the license usage log. When I average this metric out to a per-day, per-index value, it is fairly close to the amount of data I'm storing across Splunk (looking at du values on the Linux hosts). However, it doesn't count all indexes, notable events, internal logs, etc., which I need to account for.

3. Estimate the raw data size of messages (e.g. len(_raw)) over a short period of time and then use those values with tstats over a longer period to estimate size. This feels like it would give the closest value, but it is a really dirty way to get the data.

4. Use the metrics log to look at throughput values. Reading the documentation, this seems plain wrong, as it stores only samples and doesn't show the size of a message after it comes out of the various parsing queues.

What are folks doing to reliably size their indexes.conf based on the number of days they want to retain per index?

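Whatever per-day figure the measurement yields, the sizing arithmetic itself is simple. A sketch (the 10% headroom factor is an assumption of mine, not a Splunk recommendation):

```python
def max_data_size_mb(avg_daily_mb, retain_days, headroom=1.1):
    """Size homePath.maxDataSizeMB for an index: average on-disk MB
    ingested per day, times the days to retain, plus some headroom so
    a busy day does not trigger early bucket freezing."""
    return int(avg_daily_mb * retain_days * headroom)

# e.g. an index averaging 500 MB/day, retained for 90 days
size = max_data_size_mb(500, 90)
```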
Hi, I am new to the AppDynamics world. I tried to search for this on the community before posting here but couldn't find any clue.

Using the AppDynamics machine agent, I want to monitor all file systems on our Linux servers. If I start the machine agent as its own service user, it will not have permission on every file system; in that case, do I need to start the machine agent as root? What is the best practice here: do people create a dedicated account for this, or generally run the machine agent as root?

Thanks in advance!

Hello,

I have 2 kinds of events, X and Y, and I want to see how many times X and Y happen at the same time, and how many times each one of them happens alone. How can I do it? Thanks.

**edit: this is the flow:

1. Query a specific eventtype (E1) for a specific tail_id and get all the timestamps at which it appears.
2. For each of the above timestamps, query the same tail_id at the timestamp +/- a given delta.
3. For each query above, count how many times different eventtypes appear.
4. Return a sum of the total number of times each of the above eventtypes was seen with the original E1 event. E.g. if E1 was seen a total of 100 times, produce a list showing that E2 was seen all 100 times with E1, E3 was seen 50 times with E1, etc.

Do you think it would be possible to run something like this as a single Splunk query, and moreover, would it be efficient to have nested queries and loops in the same command?

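The counting logic in steps 2-4 can be sketched outside of SPL to pin down what is being asked (a hedged illustration with made-up event tuples, not a Splunk implementation):

```python
from collections import Counter

def cooccurrence_counts(e1_times, events, delta):
    """For each E1 timestamp, collect the eventtypes seen within
    +/- delta seconds, then count per eventtype how many E1
    occurrences it accompanied."""
    counts = Counter()
    for t in e1_times:
        nearby = {etype for ts, etype in events if abs(ts - t) <= delta}
        counts.update(nearby)
    return counts

# E1 fired at t=10 and t=20; E2 is near both, E3 near only the first
result = cooccurrence_counts([10, 20], [(11, "E2"), (19, "E2"), (12, "E3")], delta=2)
# result == {"E2": 2, "E3": 1}
```

In SPL, a similar effect is often approximated by bucketing _time and using stats values(eventtype) per bucket, though the edge behaviour at the delta boundary differs from a true sliding window.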
So I'm working on a project where I'm ingesting CSV files. These files' timestamps can't be read until I set:

Timestamp format = %Y-%m-%dT%H:%M:%S.%z
Timestamp fields = time (which isn't in the CSV, but shows up on the "Set Source Type" page of "Add Data")

Without it I get the errors:

- Could not use strptime to parse timestamp from "2020 february tuesday 11 3 2 local 18 2020-02-18T11:03:02.000-0600".
- Could not use regex to parse timestamp from "11 3 2 local 18 2020-02-18T1".

My problem is that after ingestion I get doubles of every time-related field, i.e.:

date_year = 2020
date_year = 2020
date_month = february
date_month = february
date_wday = tuesday
date_wday = tuesday
date_mday = 18
date_mday = 18
date_hour = 11
date_hour = 11
date_minute = 3
date_minute = 3
date_second = 6
date_second = 6
date_zone = local
date_zone = local

They still show up in the fields sidebar as single fields, but when I add one as an interesting field it doubles on the events page, and likewise when I list them using a table command. I've also tried setting Extraction to current time on the "Set Source Type" page of "Add Data", but I still get doubles.

(Screenshots of the tabled date fields and of the date fields in events were attached to the original post.)

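Duplicated fields like this often mean the same fields are being extracted twice, once at index time and once at search time. A hedged props.conf sketch; the sourcetype name and the %3N subsecond token are assumptions based on the sample timestamp 2020-02-18T11:03:02.000-0600:

```
[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
TIMESTAMP_FIELDS = time
# on the search head, stop search-time auto-extraction from
# re-extracting what is already indexed
KV_MODE = none
```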
Hi,

I would like to know the recommended way to monitor certain processes or services inside the OS. Can this be achieved with regular AppDynamics controller health rules, or do I have to search through the extensions and try to find something there?

For instance, if a service or process gets stopped, I would like a health rule that triggers an immediate event, puts it on the dashboard, etc.

Hello,

I get the following error:

2020-03-20 23:04:24,291 ERROR Execution failed ...
..., line 373, in run
    if last_entry_date_retrieved is not None and last_entry_date_retrieved > last_entry_date:
TypeError: '>' not supported between instances of 'time.struct_time' and 'NoneType'

In syndication.py, line 343, last_entry_date can be set to None. I suppose this is what happens here, and later in the code the comparison fails.

Thanks in advance

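A sketch of a None-safe version of the failing comparison (the function name and the "None means nothing seen yet" semantics are my assumptions about the intended behaviour, not the library's actual fix):

```python
def is_newer(candidate, last_entry_date):
    """None-safe replacement for `candidate > last_entry_date`:
    a missing last_entry_date means any real date counts as newer."""
    if candidate is None:
        return False
    if last_entry_date is None:
        return True
    return candidate > last_entry_date
```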
I have a strange issue. I am receiving logs in CEF format from FireEye under index=fireeye. On the search head I can see the fields being properly extracted from the CEF format, but in the ES app they are not showing up as they do on the search head. Both ends have the same packages installed. Does the ES app stop the CEF field extraction?

Hi guys,

I was wondering if someone could point me in the right direction with an issue I've been having. Basically, I have a search that is set to pull usernames from a KV store lookup. The issue is that there are "duplicate" entries in the lookup, e.g. Username and Username@email. Is there any way to stop the duplicate users from appearing in the search results? I seem to be at a bit of a dead end.

Thanks
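One common approach (a sketch; the lookup and field names are assumptions) is to normalize each entry down to the bare username before deduplicating:

```
| inputlookup my_users_lookup
| eval user_norm = lower(mvindex(split(Username, "@"), 0))
| dedup user_norm
```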