All Topics


Hi, we are suffering from high CPU on a one-box Splunk setup (about 54 cores, indexer and search head all in one). As the issue is complex, I want to know whether more hardware will help or whether we need to make other changes.

We have built a system where saved searches call other saved searches (like Java functions). We did this because we use the same core search for alerts and dashboards, so we have a single code line. However, this seems to increase the number of jobs in the dispatch directory (splunk/var/run/splunk/dispatch). For example, calling one saved search puts two jobs into the dispatch directory, a lookup table adds another job, and so on; before we know it we have 5,000 jobs, and this causes the high CPU.

We have put a script in place that removes non-running jobs every minute, but during busy periods the number still grows very fast, which keeps the CPU high. The graph shows the CPU on top and the number of directories in splunk/var/run/splunk/dispatch; we can see they are closely correlated.

To try and fight this we developed the script below, but during busy periods it is not enough:

#!/bin/bash
dispatch=/hp737srv2/apps/splunk/var/run/splunk/dispatch
splunkdir=/hp737srv2/apps/splunk

# Remove dispatch jobs older than 3 minutes that have no "save" marker
find "$dispatch" -maxdepth 1 -mmin +3 2>/dev/null | while read -r job; do
    if [ ! -e "$job/save" ]; then rm -rfv "$job"; fi
done

# Remove stale alive.token directories and session files older than 3 minutes
find "$dispatch" -type d -empty -name alive.token -mmin +3 2>/dev/null | xargs -i rm -Rf {}
find "$splunkdir/var/run/splunk/" -type f -name "session-*" -mmin +3 2>/dev/null | xargs -i rm -Rf {}

We can see from the running jobs that we have a lot of blank jobs - I think they are the lookup tables - but they also take up a lot of job slots. So I am looking for help with the following: if I add more hardware, will it help? I think not, unless it is a search head, since the jobs will still be created on the search head. Or will adding more indexers and building a cluster help with this issue?
Or is there a setting that makes one search that calls two saved searches and two lookup files not use five entries in the dispatch directory? Or is something else going on that I need to look at? Thanks in advance, Robbie
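One mitigation while investigating: instead of rm -rf from cron, stale dispatch artifacts can be moved aside first and deleted later, which is safer if a job turns out to be live (Splunk also ships a `splunk clean-dispatch <dest-dir> <latest-time>` CLI for this; check `splunk help` on your version). A minimal Python sketch of the move-aside approach, reusing the 3-minute threshold and the `save` marker file from the script above; paths and threshold are illustrative, not a recommendation:

```python
import os
import shutil
import time

def sweep_dispatch(dispatch_dir, archive_dir, max_age_min=3):
    """Move dispatch job directories older than max_age_min minutes to
    archive_dir, skipping jobs that have been saved (a 'save' marker exists).
    The archive can then be deleted out-of-band once confirmed safe."""
    os.makedirs(archive_dir, exist_ok=True)
    cutoff = time.time() - max_age_min * 60
    moved = []
    for name in os.listdir(dispatch_dir):
        job = os.path.join(dispatch_dir, name)
        if not os.path.isdir(job):
            continue
        if os.path.exists(os.path.join(job, "save")):
            continue  # keep saved jobs
        if os.path.getmtime(job) < cutoff:
            shutil.move(job, os.path.join(archive_dir, name))
            moved.append(name)
    return moved
```

This treats the dispatch directory purely as a filesystem; it does not ask splunkd whether a job is running, so the age threshold still matters.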
Hi Team, I want to do a field extraction at search time, extracting the following fields from the logs below.

Jan 8 12:52:29 abc notice def[xxxx]: xxxxxxxx:x: Pool /Common/xyz.abc.com443 member /Common/hostinfo_portal:443 session status forced disabled.
Jan 8 10:44:23 abc notice def[xxxx]: xxxxxxxx:x: Pool /Common/xyz.abc.com443 member /Common/hostinfo_portal:443 monitor status up. [ /Common/https_xxxx: up ] [ was down for xxhrs:Xxmins:XXsec ]
Jan 8 10:44:22 abc notice def[xxxx]: xxxxxxxx:x: Pool /Common/xyz.abc.com443 member /Common/hostinfo_portal:443 session status enabled.
Jan 8 10:30:42 abc notice def[xxxx]: xxxxxxxx:x: Pool /Common/xyz.abc.com443 member /Common/hostinfo_portal:443 monitor status forced down. [ /Common/https_xxxx: up ] [ was forced down for Xxhrs:Xxmins:Xsec ]
Jan 8 10:24:21 abc notice def[xxxx]: xxxxxxxx:x: Pool /Common/xyz.abc.com443 member /Common/hostinfo_portal:443 session status forced disabled.

I want:
xyz.abc.com as "hostname"
hostinfo as "client"
and the following information as "remarks":
session status forced disabled
monitor status up
session status enabled
monitor status forced down
session status forced disabled

Is a field extraction at search time possible here, and if yes, could you kindly help with the query?
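Search-time extraction for logs like these is usually done with the rex command, along the lines of `... | rex "Pool /Common/(?<hostname>\S+?)443 member /Common/(?<client>\w+?)_portal:\d+ (?<remarks>[^.\[]+)"` (a sketch; adjust if your real hostnames or ports differ from the samples). The same pattern can be sanity-checked in Python, since rex uses the same named-group regex syntax:

```python
import re

# Same pattern the rex sketch above uses; the trailing "443" and "_portal"
# suffixes are stripped by the capture boundaries.
PATTERN = re.compile(
    r"Pool /Common/(?P<hostname>\S+?)443"       # hostname, trailing 443 excluded
    r" member /Common/(?P<client>\w+?)_portal"  # client, '_portal' suffix excluded
    r":\d+ (?P<remarks>[^.\[]+)"                # remarks: text up to '.' or '['
)

def extract(event):
    """Return hostname/client/remarks for one event, or None if no match."""
    m = PATTERN.search(event)
    return {k: v.strip() for k, v in m.groupdict().items()} if m else None
```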
Hi, we recently ingested Microsoft Azure data by configuring the Microsoft Azure Add-on for Splunk, and we can see the data being ingested. Now we need to know what kind of dashboards or reports can be derived from these inputs in Splunk. Can the Microsoft Azure template for Splunk be configured to get the default reports/dashboards? As per the Splunkbase documentation, it seems the data gathered using the Splunk Add-on for Microsoft Cloud Services can be used for visualizations, reports, and searches. In our case, we used the Microsoft Azure Add-on for Splunk version 2.0.2 to fetch the data from our Azure cloud environment.
Hi all, I have created an alert with this simple query:

index=foo host="bar" action=fail | stats count by user | search count>40

It is scheduled every hour, and the trigger condition is Number of Results greater than 0. I have tried adding table and fields commands, but it still doesn't trigger. Why could this happen?
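For reference, the filtering logic the query expresses is "count fail events per user, keep users over the threshold"; if that logic returns rows when run manually over the alert's time range but the alert still never fires, the problem is more likely the schedule/trigger than the SPL (`| where count>40` is also the more common form, though `| search count>40` works too). The same logic in Python, just to make the expected behavior concrete:

```python
from collections import Counter

def users_over_threshold(events, threshold=40):
    """Mirror of: ... action=fail | stats count by user | where count>threshold.
    events: list of dicts with 'user' and 'action' keys (illustrative shape)."""
    counts = Counter(e["user"] for e in events if e.get("action") == "fail")
    return {user: c for user, c in counts.items() if c > threshold}
```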
Hi All, we have deployed Splunk Cloud version 7.2.9.1 in our environment and plan to upgrade to 8.0 or above. When we submitted a case for the upgrade, we learned that the following add-on on the ES search head is not compatible with Splunk Cloud 8.0 and above.

Add-on information: TA-QualysCloudPlatform version 1.4.0 https://splunkbase.splunk.com/app/2964/

We have installed and configured the add-on on a heavy forwarder as well as on the ES search head. According to the documentation, the add-on is required on the search head as well, but we are not completely sure whether it is really required: https://www.qualys.com/docs/qualys-ta-for-splunk.pdf

If it is not required, we would uninstall the add-on from the ES search head and then upgrade core Splunk to 8.0 or above. Kindly help us check: how can we overcome this issue and proceed with the core upgrade to 8.0 and above?
I have an hourly activity dashboard, and I want to generate a report of this hourly activity in PDF format. Can someone please advise on this?
By default, UFs send chunks of 64kB of data and spread these over multiple indexers. But indexers are supposed to reassemble these chunks so that they can break lines, delimit events, and extract timestamps. I don't see clearly in the documentation how this process works. Suppose we have 3 events: the end of event1 and the beginning of event2 are sent in a first chunk (chunk1); the end of event2 and the beginning of event3 are sent in a second chunk (chunk2). When the indexers apply line-breaking rules, each of them ends up with a part of event2. They know about it because they have a leftover (a partially delimited event), but what happens next? Also, how exactly does LINE_BREAKER_LOOKBEHIND in props.conf work? Why do you need to go back over the previous chunk of data by a certain number of bytes?
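Conceptually, the parser keeps the unconsumed tail of each chunk (the partial last event) and prepends it to the next chunk before applying the line breaker again, so a breaker match can span a chunk boundary; LINE_BREAKER_LOOKBEHIND bounds how many bytes of that tail get re-scanned (the documented default is 100). A toy sketch of the idea, not Splunk's actual implementation:

```python
import re

LINE_BREAKER = re.compile(r"[\r\n]+")  # analogous to the default LINE_BREAKER

def break_events(chunks):
    """Break a chunked byte stream into events. The leftover after the last
    breaker match is carried into the next chunk, which is why a bounded
    'lookbehind' over previous data is needed when the breaker can match
    across a boundary."""
    events, buf = [], ""
    for chunk in chunks:
        buf += chunk
        last = 0
        for m in LINE_BREAKER.finditer(buf):
            events.append(buf[last:m.start()])
            last = m.end()
        buf = buf[last:]  # partial event whose end arrives in a later chunk
    if buf:
        events.append(buf)  # trailing partial event at end of stream
    return events
```

In this toy the carried-over buffer is unbounded; Splunk caps the re-scan window with LINE_BREAKER_LOOKBEHIND so a pathological stream with no breaker match cannot force unbounded buffering.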
I want to compare user IDs and, if the last 5 digits are the same, not show them in the result. How can this be done?

0012345
abc0012345
xyx\0012345

If the resulting user ID values are as above, I want to check whether the last 5 characters (12345) are the same, and if they are, those user IDs should not appear as results in my search.
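In SPL this is typically done by extracting the 5-character suffix and filtering on how many IDs share it, e.g. `... | eval suffix=substr(user_id, -5) | eventstats count AS suffix_count BY suffix | where suffix_count=1` (a sketch; the field name user_id is an assumption). The same logic in Python:

```python
from collections import Counter

def filter_shared_suffix(user_ids, n=5):
    """Drop user IDs whose last n characters are shared with another ID in
    the result set (e.g. '0012345', 'abc0012345' both end in '12345')."""
    suffix_counts = Counter(uid[-n:] for uid in user_ids)
    return [uid for uid in user_ids if suffix_counts[uid[-n:]] == 1]
```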
I am trying to execute the following command to rebuild the TSIDX files:

splunk rebuild "bucket directory"

What should I specify for "bucket directory"? Some entries start with "db_" or "rb_", and some end with ".rbsentinel".

$ sudo -u splunk ls /opt/splunk/var/lib/splunk/someindex/db
CreationTime
GlobalMetaData
db_1575255638_1550733795_1_2EB5DD1B-EBFC-4678-A599-3C90C8E80123
db_1575255638_1550733795_1_2EB5DD1B-EBFC-4678-A599-3C90C8E80123.rbsentinel
db_1575255638_1550733795_4_2EB5DD1B-EBFC-4678-A599-3C90C8E80123
db_1575255638_1550733795_4_2EB5DD1B-EBFC-4678-A599-3C90C8E80123.rbsentinel
db_1583828795_1575944480_0_2EB5DD1B-EBFC-4678-A599-3C90C8E80123
db_1583828795_1575944480_0_2EB5DD1B-EBFC-4678-A599-3C90C8E80123.rbsentinel
db_1585128403_1583828817_2_2EB5DD1B-EBFC-4678-A599-3C90C8E80123
db_1585128403_1583828817_2_2EB5DD1B-EBFC-4678-A599-3C90C8E80123.rbsentinel
db_1585209454_1575944480_3_2EB5DD1B-EBFC-4678-A599-3C90C8E80123
db_1585209454_1575944480_3_2EB5DD1B-EBFC-4678-A599-3C90C8E80123.rbsentinel
db_1585894857_1585209525_5_2EB5DD1B-EBFC-4678-A599-3C90C8E80123
db_1585894857_1585209525_5_2EB5DD1B-EBFC-4678-A599-3C90C8E80123.rbsentinel
db_1586917200_1580354616_6_2EB5DD1B-EBFC-4678-A599-3C90C8E80123
db_1586917200_1580354616_6_2EB5DD1B-EBFC-4678-A599-3C90C8E80123.rbsentinel
db_1586918400_1586916300_7_2EB5DD1B-EBFC-4678-A599-3C90C8E80123
rb_1561972997_1550733795_1_C03F81F1-D923-458D-B4BE-0D5C6DF1EBC5
rb_1561972997_1550733795_1_C03F81F1-D923-458D-B4BE-0D5C6DF1EBC5.rbsentinel
rb_1561972997_1550733795_4_C03F81F1-D923-458D-B4BE-0D5C6DF1EBC5
rb_1561972997_1550733795_4_C03F81F1-D923-458D-B4BE-0D5C6DF1EBC5.rbsentinel
rb_1575255638_1550733795_7_C03F81F1-D923-458D-B4BE-0D5C6DF1EBC5
rb_1583828450_1564380558_0_C03F81F1-D923-458D-B4BE-0D5C6DF1EBC5
rb_1583828450_1564380558_0_C03F81F1-D923-458D-B4BE-0D5C6DF1EBC5.rbsentinel
rb_1585128180_1583828597_2_C03F81F1-D923-458D-B4BE-0D5C6DF1EBC5
rb_1585128180_1583828597_2_C03F81F1-D923-458D-B4BE-0D5C6DF1EBC5.rbsentinel
rb_1585295925_1564380558_3_C03F81F1-D923-458D-B4BE-0D5C6DF1EBC5
rb_1585295925_1564380558_3_C03F81F1-D923-458D-B4BE-0D5C6DF1EBC5.rbsentinel
rb_1585894871_1585295926_5_C03F81F1-D923-458D-B4BE-0D5C6DF1EBC5
rb_1585894871_1585295926_5_C03F81F1-D923-458D-B4BE-0D5C6DF1EBC5.rbsentinel
rb_1585901346_1575944480_6_C03F81F1-D923-458D-B4BE-0D5C6DF1EBC5
rb_1586917500_1585901374_8_C03F81F1-D923-458D-B4BE-0D5C6DF1EBC5
rb_1586918400_1586917500_9_C03F81F1-D923-458D-B4BE-0D5C6DF1EBC5
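For reference, bucket directory names follow the pattern `db|rb_<newest-event-epoch>_<oldest-event-epoch>_<local-id>[_<guid>]` (rb_ marks a replicated copy in an indexer cluster), and the `.rbsentinel` entries are marker files rather than buckets, so `splunk rebuild` is pointed at the directories themselves; verify against your version's docs. A small sketch that picks the bucket directories out of such a listing, assuming that naming convention:

```python
import re

# db|rb_<newest-epoch>_<oldest-epoch>_<id>[_<guid>]; entries ending in
# .rbsentinel are marker files, not buckets, and are excluded by the $ anchor.
BUCKET_RE = re.compile(r"^(db|rb)_(\d+)_(\d+)_(\d+)(_[0-9A-F-]+)?$")

def rebuildable_buckets(entries):
    """Return the listing entries that look like bucket directories."""
    return [e for e in entries if BUCKET_RE.match(e)]
```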
For my logs with IP and Vulnerability ID (VID), I have a few duplicate values, which I can easily remove with "dedup IP, VID", since this keeps a single event per IP+VID combination. But with a timechart over 1 month it doesn't work: if I dedup before timechart, duplicates are removed across the whole month, so the weekly counts are wrong. I need dedup to run separately for each week under timechart to give correct results.

Currently running:

My main search ... | dedup IP, VID | timechart span=w@1w count

Results I get, with incorrect counts:

_time count
2020-03-17 2224
2020-03-17 218
2020-03-17 689
2020-03-17 1432
2020-03-17 666

But if "dedup IP, VID" actually worked separately for each week, then each week's result should be around 2000. Thanks in advance.
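The usual fix is to let the aggregation itself do the per-bucket dedup instead of a global dedup, e.g. `... | eval pair=IP.":".VID | timechart span=1w dc(pair)` (a sketch; dc() counts distinct values within each time bucket, so the same pair can be counted once per week). The equivalent logic in Python, showing why this differs from a global dedup:

```python
from collections import defaultdict
from datetime import timedelta

def weekly_distinct(events):
    """Count distinct (IP, VID) pairs per week, like
    `| eval pair=IP.":".VID | timechart span=1w dc(pair)`:
    dedup happens inside each week's bucket, not across the whole range.
    events: iterable of (date, ip, vid) tuples."""
    buckets = defaultdict(set)
    for ts, ip, vid in events:
        week_start = ts - timedelta(days=ts.weekday())  # bin to Monday
        buckets[week_start].add((ip, vid))
    return {week: len(pairs) for week, pairs in sorted(buckets.items())}
```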
We need to monitor multiple dynamic queues; queues are created and removed. I have tried using "jms://queue/dynamicQueues/AVQ." and "jms://queue/dynamicQueues/AVQ.>" as indicated on the ActiveMQ documentation page, but I get results like:

name="queue_browse" queue_name="AVQ." queue_length="0"

Is there a way to list all the queues by name?
Hi, I copy my application's logs to the Splunk server with a script (I don't use a forwarder here). The problem is that the logs reach the Splunk server with a one-day delay: yesterday's log arrives today. The real date of the logs exists only in the name of the log file; each line of the log contains only a time, not a date. As you know, Splunk then uses the import date of the log file as the date of the events. How can I force Splunk to use the date that exists in the log file's name? FYI: the file extension is bz2. Thanks.
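Splunk can take the date from the source path via a custom datetime.xml referenced by DATETIME_CONFIG in props.conf (the filename is part of the source that the timestamp processor can match against). The logic to implement is: date from the file name, time from the event line. A Python sketch of that combination, assuming a hypothetical `app-YYYY-MM-DD.log.bz2` naming scheme - adjust the pattern to your real file names:

```python
import re
from datetime import datetime

# Hypothetical filename convention, e.g. app-2020-04-09.log.bz2
DATE_IN_NAME = re.compile(r"(\d{4})-(\d{2})-(\d{2})")

def event_timestamp(filename, line_time):
    """Combine the date found in the file name with the HH:MM:SS found in
    the event line - the rule a custom datetime.xml would apply at index
    time when the events themselves carry no date."""
    y, mo, d = map(int, DATE_IN_NAME.search(filename).groups())
    t = datetime.strptime(line_time, "%H:%M:%S").time()
    return datetime(y, mo, d, t.hour, t.minute, t.second)
```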
Hi, I made a stats table based on the command below, and my visualization is shown. I would like to ask whether there is a way to have additional information on the X-axis. For example, the current X-axis has the DESCRIPTION and count(VALUE), but I would like to have the VALUE (RUN and STOP) information in there too.
Hi, after installing and configuring DB Connect, I now have Java listening on all interfaces (ports 9999 and 1090). How can this be restricted to localhost? thx afx
I have a log that contains multiple time fields:

_time (ingest time)
Processed time (processed_time)
Actioned time (actioned_time)
Result time (result_time)

_time (ingest time) is configured in props to adjust the timezone (the original log has no offset), so it is working fine for my timezone. However, the rest of the fields are just static fields. For processed_time (an example timestamp is Apr 10 2020 05:45:52) I wrote the following SPL to convert the static field to epoch:

index=foo
| eval epoch_time=strptime(processed_time, "%b %d %Y %H:%M:%S")
| eval processed_time_normalized=strftime(epoch_time, "%b-%d-%Y %H:%M:%S")

What I would like to do is add time to this value: if I wanted to add 2, 4, or 9 hours to this field, how would I do that? I tried

| eval processed_time_normalized=strftime(epoch_time, "%b-%d-%Y %H:%M:%S %:::z +8")

and

| eval processed_time_normalized=strftime(epoch_time, "%b-%d-%Y %H:%M:%S %Z")

but all this does is append the offset (+8 in this example) or the timezone I am in with %Z. I need processed_time (as well as actioned_time and result_time) to actually show the time 8 hours later in this example. I would also like to know how to put this into something like props or transforms so I don't have to do it via SPL.
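Since strftime only formats, the shift has to happen on the epoch value itself, e.g. `| eval shifted=strftime(epoch_time + 8*3600, "%b-%d-%Y %H:%M:%S")`, or with `relative_time(epoch_time, "+8h")` in place of the addition (sketches; pick the offset per field as needed). The same arithmetic in Python:

```python
from datetime import datetime, timedelta

def shift_hours(timestamp, hours,
                fmt_in="%b %d %Y %H:%M:%S", fmt_out="%b-%d-%Y %H:%M:%S"):
    """Parse the static time field, add N hours, and re-format - the Python
    equivalent of strftime(epoch_time + hours*3600, fmt_out) in SPL."""
    parsed = datetime.strptime(timestamp, fmt_in)
    return (parsed + timedelta(hours=hours)).strftime(fmt_out)
```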
How can I capture only the words "successfully sent using abc.def.com" before indexing in Splunk, from the log file below?

"series","number","Date","Time","current","Message"
"Info","0","07/20/14","07:09:03",,"draft: 'REQUEST REQUIRED' From:'abc@mail.com' To:'123@mail.com' was successfully sent using abc.def.com"
"Info","0","07/20/14","07:09:03",,"draft: 'REQUEST REQUIRED' From:'abc@mail.com' To:'123@mail.com' was successfully sent using abc.def.com"
"Info","0","07/20/14","07:09:03",,"draft: 'REQUEST REQUIRED' From:'abc@mail.com' To:'123@mail.com' was successfully sent using abc.def.com"
"Info","0","08/16/16","07:45:03",,"draft: 'REQUEST REQUIRED' From:'abc@mail.com' To:'123@mail.com' was successfully sent using abc.def.com"

What props.conf is required, and where should it be placed? Thanks in advance :)
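Index-time rewrites like this are typically done with a SEDCMD in props.conf on the first full Splunk instance that parses the data (indexer or heavy forwarder), e.g. `SEDCMD-keep_sent = s/.*(successfully sent using [^"]+).*/\1/` under your sourcetype's stanza - a sketch; the stanza name and pattern need adapting to your data. The substitution can be checked in Python with the equivalent regex:

```python
import re

# Same rewrite the SEDCMD sketch above would apply at index time:
# keep only the "successfully sent using <host>" part of the event.
KEEP = re.compile(r'.*(successfully sent using [^"]+).*')

def reduce_event(raw):
    """Return only the captured phrase; leave non-matching events untouched."""
    m = KEEP.match(raw)
    return m.group(1) if m else raw
```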
I recently installed the VirusTotal app on my Splunk instance (https://splunkbase.splunk.com/app/4283/) and completed the initial app setup with my VT token. When I come back to search and execute the | virustotal command, I receive the error below:

"VirusTotal Command: No field specified for matching. Specify one of 'hash=', 'ip=', 'url=', or 'domain=' and try again."

When I modify my search query to | virustotal ip="8.8.8.8", I receive the error: Illegal value: ip=8.8.8.8

Some background information:
- Version of VirusTotal TA you're using: 2.0.0
- Whether the Splunk instance you installed it on is Splunk Cloud or on-premises: on-prem
- Version of Splunk: 7.3.4
- Type of Splunk instance (e.g. Search Head, Indexer, Heavy Forwarder, All-In-One): Search Head
- Does your environment require a proxy to call out to the internet: Yes

Could someone advise how this can be resolved?
With the growing focus on remote work, we are in a situation where we can no longer staff our operations center. Until now, alarms were signaled in the center with a warning light, but since we now operate from home, we would like to be able to recognize alarms at home as well. I know this can be done by email, but I would like to respond to alarms by having a sound play in reaction to search results. Could you guide me on what kind of dashboard I should build to achieve this?
I am trying to add an external database connection to be used in a lookup. However, when I try to add the database connection, it gives me the error below:

Encountered the following error while trying to update: In handler 'databases': Unexpected error "<class 'spp.java.bridge.JavaBridgeError'>" from python handler: "Command com.splunk.bridge.cmd.Reload returned status code 1". See splunkd.log for more details.

Can someone please help me? I am new to Splunk and have never really touched the system settings.
Hello, I'm new to Splunk and still exploring how to use it. I was able to successfully set up Splunk Enterprise and a Splunk universal forwarder on two separate Linux virtual machines. Now, my goal is to create monitoring metrics for CPU usage, etc. I've installed the App for Infrastructure and the add-on for infrastructure on the Splunk Enterprise VM. When adding entities, I can't run the generated Linux command since I have restrictions (firewalls, Kaspersky, etc.), so I followed this: https://answers.splunk.com/answers/706010/in-the-splunk-app-for-infrastructure-can-you-use-e.html. Instead of following the Windows guide, I followed the Linux one (https://docs.splunk.com/Documentation/InfraApp/1.2.2/Admin/ManageAgents). I've also added an inputs.conf and outputs.conf in etc/apps/search/local of my Splunk forwarder directory. However, when I restart my UF, there are still no entities in my Splunk Enterprise app. Can you help me with this? Thank you in advance!

inputs.conf

[perfmon://CPU Load]
counters = % C1 Time;% C2 Time;% Idle Time;% Processor Time;% User Time;% Privileged Time;% Reserved Time;% Interrupt Time
instances = *
interval = 30
object = Processor
index = em_metrics
_meta = os::"Linux"

[perfmon://Physical Disk]
counters = % Disk Read Time;% Disk Write Time
instances = *
interval = 30
object = PhysicalDisk
index = em_metrics
_meta = os::"Linux"

[perfmon://Network Interface]
counters = Bytes Received/sec;Bytes Sent/sec;Packets Received/sec;Packets Sent/sec;Packets Received Errors;Packets Outbound Errors
instances = *
interval = 30
object = Network Interface
index = em_metrics
_meta = os::"Linux"

[perfmon://Available Memory]
counters = Cache Bytes;% Committed Bytes In Use;Page Reads/sec;Pages Input/sec;Pages Output/sec;Committed Bytes;Available Bytes
interval = 30
object = Memory
index = em_metrics
_meta = os::"Linux"

[perfmon://System]
counters = Processor Queue Length;Threads
instances = *
interval = 30
object = System
index = em_metrics
_meta = os::"Linux"

[perfmon://Process]
counters = % Processor Time;% User Time;% Privileged Time
instances = *
interval = 30
object = Process
index = em_metrics
_meta = os::"Linux"

[perfmon://Free Disk Space]
counters = Free Megabytes;% Free Space
instances = *
interval = 30
object = LogicalDisk
index = em_metrics
_meta = os::"Linux"

[monitor:///var/log/syslog]
disabled = false
sourcetype = syslog

[monitor:///var/log/daemon.log]
disabled = false
sourcetype = syslog

[monitor:///var/log/auth.log]
disabled = false
sourcetype = syslog

[monitor:///var/log/apache/access.log]
disabled = false
sourcetype = combined_access

[monitor:///var/log/apache/error.log]
disabled = false
sourcetype = combined_access

[monitor:///opt/splunkforwarder/var/log/splunk/*.log]
disabled = false
index = _internal

[monitor:///etc/collectd/collectd.log]
disabled = false
index = _internal

outputs.conf

[tcpout]
defaultGroup = splunk-app-infra-autolb-group

[tcpout:splunk-app-infra-autolb-group]
disabled = false
server = 192.168.56.110:9997

collectd.conf

# Config file for collectd(1).
# Please read collectd.conf(5) for a list of options.
# http://collectd.org/

# Global settings for the daemon.
Hostname "192.168.56.109"
#FQDNLookup true
#BaseDir "/var/lib/collectd"
#PIDFile "/var/run/collectd.pid"
#PluginDir "/usr/lib64/collectd"
#TypesDB "/usr/share/collectd/types.db"

# When enabled, plugins are loaded automatically with the default options
# when an appropriate <Plugin ...> block is encountered. Disabled by default.
#AutoLoadPlugin false

# When enabled, internal statistics are collected, using "collectd" as the
# plugin name. Disabled by default.
#CollectInternalStats false

# Interval at which to query values. This may be overwritten on a per-plugin
# base by using the 'Interval' option of the LoadPlugin block:
# <LoadPlugin foo>
#   Interval 60
# </LoadPlugin>
Interval 60
#MaxReadInterval 86400
#Timeout 2
#ReadThreads 5
#WriteThreads 5

# Limit the size of the write queue. Default is no limit. Setting up a limit is
# recommended for servers handling a high volume of traffic.
#WriteQueueLimitHigh 1000000
#WriteQueueLimitLow 800000

# Logging: plugins which provide logging functions should be loaded first, so
# log messages generated when loading or configuring other plugins can be
# accessed.
LoadPlugin syslog
LoadPlugin logfile
<LoadPlugin "write_splunk">
    FlushInterval 10
</LoadPlugin>

# LoadPlugin section. Lines beginning with a single `#' belong to plugins
# which have been built but are disabled by default.
#LoadPlugin csv
LoadPlugin cpu
LoadPlugin memory
LoadPlugin df
LoadPlugin load
LoadPlugin disk
LoadPlugin interface

# Plugin configuration. In this section configuration stubs for each plugin
# are provided. A description of those options is available in the
# collectd.conf(5) manual page.
<Plugin logfile>
    LogLevel info
    File "/etc/collectd/collectd.log"
    Timestamp true
    PrintSeverity true
</Plugin>

<Plugin syslog>
    LogLevel info
</Plugin>

<Plugin cpu>
    ReportByCpu false
    ReportByState true
    ValuesPercentage true
</Plugin>

<Plugin memory>
    ValuesAbsolute false
    ValuesPercentage true
</Plugin>

<Plugin df>
    FSType "ext2"
    FSType "ext3"
    FSType "ext4"
    FSType "XFS"
    FSType "rootfs"
    FSType "overlay"
    FSType "hfs"
    FSType "apfs"
    FSType "zfs"
    FSType "ufs"
    ReportByDevice true
    ValuesAbsolute false
    ValuesPercentage true
    IgnoreSelected false
</Plugin>

<Plugin load>
    ReportRelative true
</Plugin>

<Plugin disk>
    Disk ""
    IgnoreSelected true
    UdevNameAttr "DEVNAME"
</Plugin>

<Plugin interface>
    IgnoreSelected true
</Plugin>

<Plugin write_splunk>
    server "192.168.56.110"
    port "8088"
    token "SomeGUIDToken"
    ssl true
    verifyssl false
    owner:admin
</Plugin>

# Update Hostname, <HEC SERVER> & <splunk app server> in the collectd.conf file above.
# Also, you can add dimensions as <Dimension "key:value"> to the write_splunk plugin (optional).