All Topics


My question is: can it be used with IPv6? If so, how? I like the toolkit; it's simple, and simple is great for the guys who walk into the lab and ask questions. Thanks for all your hard work.
Hello, Whenever I try to install a new version or uninstall the current version on my system, I get an error message that the install failed, and the account already exists. Any ideas?
Hi all, I was wondering, with the following table, would I be able to create a set of tiles that are color-coded based on the Status field and that also show the application in a large font?

Environment  Application  Hostname  Status
EUAT         MC           H1        RUNNING
EUAT         MC           H2        DOWN
DEV          IC           H4        ERROR
UAT          IC           HK        RUNNING

I was hoping that "RUNNING" would be green, "DOWN" red and "ERROR" orange. Any assistance would be greatly appreciated!
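A rough sketch of one way to get the color coding in a Simple XML table (not from the original post; the lookup name app_status.csv and the hex values are placeholders):

    <table>
      <search>
        <query>| inputlookup app_status.csv | table Environment Application Hostname Status</query>
      </search>
      <!-- color the Status cell by value; the value-to-color mapping here is illustrative -->
      <format type="color" field="Status">
        <colorPalette type="map">{"RUNNING":#53A051,"DOWN":#DC4E41,"ERROR":#F8BE34}</colorPalette>
      </format>
    </table>

For true tile-style panels with the application in large text, single value visualizations with a trellis split are the usual alternative.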
Hi all, I'm trying to convert the message body of my events into fields. The structure of the event message is a comma-delimited key-value pair format. An example of the structure:

Time                     Event
10/08/2021 15:09:49.000  Timestamp,10/08/2021 15:09:49,Environment,EUAT,Artefact,ICE,Application,ICE,Domain,ws,Status,RUNNING
10/08/2021 15:09:49.000  Timestamp,10/08/2021 15:09:49,Environment,EUAT,Artefact,ICE,Application,Radiating Whitespaced App,Domain,dc,Status,ERROR
10/08/2021 15:09:49.000  Timestamp,10/08/2021 15:09:49,Environment,DEV,Artefact,MC,Application,MCIO,AppID,4,Hostname,4569erg,Domain,wsdc,Status,STOPPED

Is there a way, through a search query, to make every odd value a 'field' and every even value a corresponding 'value' for that field? Therefore, 'Timestamp' would be a field with its corresponding value, then 'Environment' would be the next field. The tricky part is that the key-value pair strings can vary in length across events. For instance, the first row has 6 key-value pairs, whereas the third row has 8. Any help would be greatly appreciated!
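A minimal sketch of one approach, assuming the whole message body is the comma-delimited list and Splunk 8.0+ for mvmap; it pairs odd/even positions regardless of how many pairs an event has, then rewrites _raw within the search so extract can pick the pairs up:

    | eval parts = split(_raw, ",")
    | eval idx = mvrange(0, mvcount(parts), 2)
    | eval kv = mvmap(idx, mvindex(parts, idx) . "=\"" . mvindex(parts, idx + 1) . "\"")
    | eval _raw = mvjoin(kv, " ")
    | extract

The quoting around each value keeps multi-word values such as "Radiating Whitespaced App" intact, and overwriting _raw only affects the search's own results.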
Hi All, I'm using the default Windows add-on and fetching the %idletime for PhysicalDisk, but it shows 0% for all drives except C:. When I log in to the server and check the graphs it's always around 100%, and while Performance Monitor is open Splunk also shows me the same data (100% idle time). Kindly help me out with this.

The stanza I'm using in inputs.conf:

[perfmon://PhysicalDisk]
object = PhysicalDisk
counters = Disk Transfers/sec; % Disk Time; % Idle Time; Avg. Disk sec/Write; Avg. Disk sec/Read
disabled = 0
index = perfmon
instances = *
interval = 300
mode = multikv
useEnglishOnly = true
I have a few lookups created by users who have left the organization. We need to remove these lookups since they take a large amount of space. Before we remove them, is there a query we can use to find out how many searches use these lookups?
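A rough sketch of one way to check, assuming the rest command is available to you; the lookup file name my_lookup.csv is a placeholder. This scans saved-search definitions for a reference to the file (ad-hoc usage would need a similar search over index=_audit):

    | rest /servicesNS/-/-/saved/searches
    | search search="*my_lookup*"
    | table title eai:acl.app eai:acl.owner search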
Hello, I would like to use the info from a lookup table in my dashboard search.

Lookup table name: FIP.csv

Content:

field1;field2
160;43
180;50

I tried this:

| inputlookup FIP.csv

and then several variations like

| lookup FIP field1 OUTPUTNEW field2

but nothing works. What is the correct syntax?
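A minimal sketch, assuming the underlying problem is the delimiter: Splunk lookup files are expected to be comma-separated, so a semicolon-delimited file is typically read as one single column. With the file saved as:

    field1,field2
    160,43
    180,50

both of these forms should then work (the index name your_index is a placeholder):

    | inputlookup FIP.csv

    index=your_index | lookup FIP.csv field1 OUTPUTNEW field2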
Hi all, I have created a lookup on a HF (taking batch inputs from DB Connect into a lookup), but I am unable to access the lookup on the search head. Please help.
Hello, I have this query:

sourcetype="billinglog" "Reported to MonitorProcessing successfully"
| spath "AdditionalData.EventData.MetricName"
| search "AdditionalData.EventData.MetricName"=DepositV2
| rename AdditionalData.EventData.monitorProcessingDto.Country as Country
| search AdditionalData.EventData.monitorProcessingDto.FTD="*"
| stats count(AdditionalData.EventData.monitorProcessingDto.FTD=Yes) AS FTDyes
| table FTDyes

FTDyes returns 0, while if I change

AdditionalData.EventData.monitorProcessingDto.FTD="*"

to:

AdditionalData.EventData.monitorProcessingDto.FTD="yes"

I get a result of 12. What am I missing? Thanks.
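As a note, stats count(field=value) is not a conditional count in SPL, and eval comparisons are case-sensitive (unlike the search command, which is why FTD="yes" matched 12 events). A minimal sketch of the usual count(eval(...)) pattern, with the "Yes" casing taken from the original query and to be adjusted to the actual data:

    ...
    | search AdditionalData.EventData.monitorProcessingDto.FTD="*"
    | stats count(eval('AdditionalData.EventData.monitorProcessingDto.FTD'="Yes")) AS FTDyes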
Hi guys, I'm having a problem: the drilldown menu for the panels on my dashboard is disappearing. When I render the dashboard it's there, but when I try to click it, it vanishes. There are no scripts or custom visualizations on the dashboard, so I don't understand why this happens. The permissions for the dashboard are also global.
I'm trying to display the cumulative sum in a timechart, across two sourcetypes:

index= _internal | [search sourcetype=source1 clu=* value=* | rename value as source1value]
| appendcols [search sourcetype=source2 clu=* value=* | rename value as source2value]
| table source1value source2value
| eval res=source2value-source1value
| stats sum(res)

Up to here this gives the sum of res. I need to display this cumulative sum in the timechart. Can anyone suggest how I can achieve this?
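A minimal sketch of the final steps, assuming _time survives to this point (the table command as written drops it, so carry _time through); the 1h span is illustrative:

    | eval res = source2value - source1value
    | timechart span=1h sum(res) as res
    | streamstats sum(res) as cumulative_res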
Hello, I have the below search:

index=test sourcetype=Test
| stats count by _time
| eventstats perc99(count) as p99
| eval Percentile = case(count >= p99, "99%")
| stats count by transactions by percentile

I want to add a column that shows the % of transactions in the 99th percentile, however I can't work out how to do this. Any advice would be greatly appreciated.

Thanks

Joe
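A minimal sketch of one way to get that column, assuming each _time bucket counts as one transaction and that buckets below the 99th percentile are kept for comparison (field names are illustrative):

    index=test sourcetype=Test
    | stats count by _time
    | eventstats perc99(count) as p99
    | eval Percentile = if(count >= p99, "99th percentile", "below 99th")
    | stats count as transactions by Percentile
    | eventstats sum(transactions) as total
    | eval pct_of_total = round(100 * transactions / total, 2)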
Hello, I have a lookup which contains hostnames. How can I search over indexes (for example index=*) only for the hostnames that are in the lookup? Thank you.
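A minimal sketch, assuming the lookup file is called my_hosts.csv and its column is named host (both placeholders); the subsearch expands into an OR of host= terms. If the column has another name, rename it to host inside the subsearch:

    index=* [ | inputlookup my_hosts.csv | fields host ]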
I am collecting data which tests that a server can reach other destinations. This data is collected in the form of source, destination, application name, description and state - did it connect or not. I would like to use this data in an ITSI Service's KPI to measure the state of connections from applications to other servers. However, in ITSI it seems only possible to split by one value, not two, but I read that it's possible to create a new field combining the two fields to get around this. I've tried this but I'm getting some unexpected results:

I created a base search which creates a new combined field called 'host_application':

index=main task=checkopenports | eval host_application = host . "_" . application

I added a new entity to the service called host_application and set it to match host123_websiteservice (which is what I want to pick out of the results to match for this service: the server 'host123' and the application 'websiteservice').

In the KPI configuration I set 'Entity Split Field' to host_application, also set 'Entity Filter Field' to host_application, and set the calculation to 'count'.

I'm hoping this should split the results by the host_application field and then only show items matching the host_application field set in the entities tab, but instead it shows a value of 0 and no entities.

Any suggestions as to what I'm doing wrong here?

Thanks

Eddie
Hi All, looking for some help on a new Cloud instance we have, and to understand it a bit better. 5 GB per day; I have a non-IT-sec background.

I have checked the CMC dashboard overview and am showing a 0 GB ingest volume.
Throughput by index is showing values with a max of 11.597 KB (last 24 hours).
When I go to the CMC dash / License Usage / Ingest, everything is blank and no figures are showing (but search is working).
When I go to the CMC dash / Workload, I can see SVC usage at its highest at 5.418.
Current searchable index storage is at 16 GB: _internal = 11, _metrics = 4, _introspection = 1.
Data Ingested graph = "no results found".

I have so far ingested a small 8 KB Excel CSV file to a lookup table and created a small inputlookup dashboard from it with a few graphs and charts. I have not set up any UFs or any API feeds etc.

Is the CMC not showing my daily usage quota because the files I have imported are so tiny? If so, is there a way to show KB/MB and not GB at this stage? Or a way to show a simple bar graph of the daily 5 GB license and what total of this I have used per day? (I believe I used to have this simple-to-read info under License when I had Enterprise, but do not seem to get the same results on Cloud.)

Thank you all.
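A rough sketch of a daily-usage search in MB, assuming the internal license_usage.log is searchable on your Cloud stack (visibility varies by deployment). Note that a file loaded into a lookup table does not count as indexed ingest, which would explain the near-zero figures:

    index=_internal source=*license_usage.log* type="Usage"
    | timechart span=1d sum(b) as bytes
    | eval MB_used = round(bytes / 1024 / 1024, 2)
    | table _time MB_used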
What's the best way to create a base search that will be generic/portable across all clients and will look over a variable period of 45 days to identify which hosts have stopped sending logs to Splunk? I want to scale it out to about 20,000 apps. I want to keep the same SPL and use lookups and macros to filter the requirements for each client, but these will not be in the base search.
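A minimal sketch of one portable pattern, using tstats to keep it cheap at scale; the lookup name client_hosts.csv and its field are placeholders, applied after the base search as the post describes:

    | tstats latest(_time) as last_seen where index=* earliest=-45d by host
    | eval days_silent = round((now() - last_seen) / 86400, 1)
    | where days_silent > 1
    | lookup client_hosts.csv host OUTPUTNEW client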
I have a query which has columns like AccountNO, eventType, _time and difference. I'm trying to find the time difference of each eventType (there are 13 eventTypes); I'm following an algorithm and am able to get the time difference of these 13 event types. Right now my result looks like this:

AccountNO   eventType     _time               difference
123456789   eventType1    1/1/2021:12:00:00
            eventType2    1/1/2021:12:01:20
            eventType3    1/1/2021:12:03:00
            eventType4    1/1/2021:12:04:00
            eventType5    1/1/2021:12:08:00
            eventType6    1/1/2021:12:12:00
            eventType7    1/1/2021:12:13:00
            eventType8    1/1/2021:12:14:50
            eventType9    1/1/2021:12:16:00
            eventType10   1/1/2021:12:18:00
            eventType11   1/1/2021:12:19:00
            eventType12   1/1/2021:12:21:30
            eventType13   1/1/2021:12:23:00

I used eval and a formula to get the difference of the 13 eventTypes, as D1 through D13. Now I want to map these D1 to D13 values into the difference field/column, so that my result looks like the below. I guess it has something to do with a CASE statement but it's not working for me. Please help.

AccountNO   eventType     _time               difference
123456789   eventType1    1/1/2021:12:00:00   00:00
            eventType2    1/1/2021:12:01:20   01:20
            eventType3    1/1/2021:12:03:00   01:40
            eventType4    1/1/2021:12:04:00   01:00
            eventType5    1/1/2021:12:08:00   07:00
            eventType6    1/1/2021:12:12:00   02:00
            eventType7    1/1/2021:12:13:00   03:20
            eventType8    1/1/2021:12:14:50   02:00
            eventType9    1/1/2021:12:16:00   01:00
            eventType10   1/1/2021:12:18:00   02:00
            eventType11   1/1/2021:12:19:00   01:00
            eventType12   1/1/2021:12:21:30   02:00
            eventType13   1/1/2021:12:23:00   04:00
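As an alternative to mapping pre-computed D1..D13 values with case(), one common pattern (a sketch, not the poster's algorithm) derives each row's gap from the previous event directly; note tostring(..., "duration") renders HH:MM:SS rather than MM:SS:

    | sort 0 AccountNO _time
    | streamstats current=f window=1 last(_time) as prev_time by AccountNO
    | eval difference = if(isnull(prev_time), "00:00:00", tostring(_time - prev_time, "duration"))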
Hi all, I have a dashboard which is comprised of 5 tables. However, sometimes it can get annoying scrolling all the way down. Is there a way that at the top of the dashboard I can have 5 hyperlinks that scroll to a particular section of the dashboard? Would this be possible by giving the tables IDs? The desired functionality is much like Confluence, where you can put anchors throughout the page and create hyperlinks to scroll to particular sections. Thanks, any help would be greatly appreciated!
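A rough Simple XML sketch of the anchor idea (the panel id, link text and query are made up, and whether the browser scrolls to it depends on how Splunk renders the id into the page):

    <row>
      <panel>
        <html>
          <!-- in-page link pointing at the panel id defined further down -->
          <a href="#panel_table1">Jump to Table 1</a>
        </html>
      </panel>
    </row>
    <row>
      <panel id="panel_table1">
        <table>
          <search><query>index=_internal | head 5</query></search>
        </table>
      </panel>
    </row>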
We have a requirement to send audit logs from our host servers (/var/log/audit/audit.log) to both our indexers and to a 3rd-party syslog server. I am testing with the host gary-test2.ussl.uhs, with audit logs in /var/log/audit/audit.log. I have configured the universal forwarder host gary-test2.ussl.uhs to redirect all its logs to the heavy forwarder. I would like to have the heavy forwarder send its logs to the indexers, but also a copy of all audit events to the syslog server syslogp01.ussl.uhs.

Here is the architecture involved with the routing:

Universal Forwarder: gary-test1.ussl.uhs
Heavy Forwarder: ussl-splkhfwt01.ussl.uhs
Indexers: splkidxt01.ussl.uhs, splkidxt02.ussl.uhs
Syslog server: syslogp01.ussl.uhs (10.17.8.206)

Here is how I configured the heavy forwarder splkhfwt01.ussl.uhs:

/opt/splunk/etc/apps/forwarder_syslog/local/props.conf

[source::/var/log/audit/audit.log]
TRANSFORMS-routing = troutingrsa

/opt/splunk/etc/apps/forwarder_syslog/local/transforms.conf

[troutingrsa]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = Myroutingrsa

/opt/splunk/etc/system/local/outputs.conf

[tcpout]
defaultGroup = default-autolb-group
indexAndForward = 0

[tcpout:default-autolb-group]
disabled = false
server = splkidxt01.ussl.uhs:9997,ussl-splkidxt02.ussl.uhs:9997

[syslog:Myroutingrsa]
server = 10.17.8.206:514
sendCookedData = false
type = udp
disabled = false

What I am seeing is that the /var/log/audit/audit.log logs from host gary-test2.ussl.uhs are appearing in search queries on Splunk, and those same logs are also appearing on the syslog server. Here are the problems I found:

1. Logs other than the audit.log logs from the host gary-test2.ussl.uhs are also appearing on the syslog server.
2. I suspected that props.conf and transforms.conf were not doing their job, so I commented out all the settings in both files and restarted Splunk. The logs continued to be sent to the syslog server, which suggests props.conf and transforms.conf are having no effect.
3. Just to be sure, I removed "[syslog:Myroutingrsa]" and its settings from outputs.conf. That made the logs stop forwarding to the syslog server.

Does anyone see what is wrong with my forwarding configuration settings?
Hi all, I have a field that has a time value such as (the _time field): 2021-08-12 15:18:42. However, when I go to use the rename command on the _time field, it changes the format to: 1628723833. Any assistance in how NOT to change the date format while also renaming the field would be greatly appreciated.
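For context, _time is stored as epoch seconds; the readable form is just how the UI renders the _time column, and that rendering is lost once the field is renamed. A minimal sketch of the usual workaround (the target name Time is arbitrary):

    | eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S")
    | table Time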