All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, I have an SC4S server set up receiving info from our network UPS. I have created a new index in Splunk for any data to do with our UPS. On the SC4S server I modified the compliance_meta_by_source.conf and compliance_meta_by_source.csv files. When I add the entry for the new index, the info stops coming into our Splunk environment. If I remove it, the info starts coming over again. What am I doing wrong? If I leave the .splunk.index portion out, the info goes over with the new sourcetype and also creates the new field in Splunk, but as soon as I add the index part the info stops going over. Below is what I have put in both files.

compliance_meta_by_source.conf:

  filter f_powerware_ups { host("my ups IP address" type(glob)) };

compliance_meta_by_source.csv:

  f_powerware_ups,.splunk.sourcetype,"powerware_ups"
  f_powerware_ups,fields.vendor,"Eaton"
  f_powerware_ups,.splunk.index,"netups"
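When events vanish only after an index override is added, the receiving side may be rejecting them rather than SC4S dropping them. A minimal SPL sketch for checking the HEC error log on the Splunk side; the component name used here is the one HEC ingest errors are usually logged under in splunkd.log, so treat it as an assumption:

  index=_internal sourcetype=splunkd component=HttpInputDataHandler log_level=ERROR
  | stats count by host, message

If this turns up "Incorrect index"-style messages, it would suggest the HEC token used by SC4S does not list netups among its allowed indexes.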
Hi all, I have the following situation. We have an indexer cluster with 4 peers where there is currently still enough storage on our SSDs, so the home, cold, and thawed paths are all the same (/data/<index_name>/(colddb|db|thaweddb)). Now we will extend the storage with HDDs and plan to migrate the cold and thawed paths for all indexes to a different storage location (/archive/<index_name>/(colddb|thaweddb)). The question is how this should work. I want to minimize the downtime, so I would prepare the new locations on all 4 indexer peers and already copy all buckets in colddb and thaweddb to the new location. Can I reduce the bucket-roll activity by starting maintenance mode? I would activate maintenance mode, make another copy to get all bucket files into the same state, then adjust indexes.conf and initiate a rolling restart, and afterwards disable maintenance mode. But does a rolling restart work while maintenance mode is active? Or do I only have to copy the files to the new location, change indexes.conf, and restart? And if a bucket roll from warm to cold takes place in the meantime, can I copy the files again from the old to the new directory and restart again? I do not want to lose any data. Please give me advice, because I did not find any information in the documentation on when and how to restart the indexer cluster in such a case. Kind regards, Kathrin
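For reference, a minimal sketch of what the per-index indexes.conf stanza would look like after the move, using the paths from the post (the index name is a placeholder):

  [<index_name>]
  homePath   = /data/<index_name>/db
  coldPath   = /archive/<index_name>/colddb
  thawedPath = /archive/<index_name>/thaweddb

In a cluster this change would normally be pushed from the cluster manager as part of the configuration bundle rather than edited on each peer.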
After we upgraded from 8.0.7 to 8.2.3, we are having lots of problems with search performance. We noticed that the Analytics Workspace changed a great deal, and we wonder if that could be causing the performance issue. Now we have lots of searches queued, which didn't happen before. Also, sometimes the maximum number of historical searches is exceeded and we end up having to restart Splunk. After the restart, our system runs okay for a while. Any help will be appreciated.
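One way to quantify the queueing is to check the scheduler's own log. A minimal sketch, assuming the default _internal index is available:

  index=_internal sourcetype=scheduler status=*
  | timechart span=1h count by status

A rising skipped or deferred count after the upgrade would point toward search concurrency limits rather than the Analytics Workspace itself.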
Hello all, I am trying to integrate Trend Micro IWSVA logs into Splunk. It is showing as a supported device, but I am unable to find any TA. Can someone suggest how to proceed? Currently I have DDA/DDI logs being ingested, for which the logs are properly parsed. I am using the same sourcetype for the IWSVA logs as well, i.e. cefevents. Can someone please suggest what I should use here? Thanks in advance for the help.
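For context, a minimal sketch of how a syslog feed could be tagged with that sourcetype at the input layer; the port and index are placeholders, not from the post:

  # hypothetical inputs.conf stanza on the receiving instance
  [udp://514]
  sourcetype = cefevents
  index = main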
I have a table (that is a split URL) in the following format:

  field1  field2  field3     field4       field5  field6
  aaaaa   11111   qqqqq      aaaaaa       tttttt  yyyyyy
  aaaaa   11111   cccccc     rrrrrrr
  bbbbb   22222   rrrrrrrrr  iiiiiiiiiii  vvvvvv
  ccccc   22222   wwwww      ttttttttt
  ddddd   33333   444444     5555555

And the other table has only some of the columns:

  field1  field2  field3  field4  Name    Description
  ccccc   22222                   Mickey  Mouse
  aaaaa   11111                   Pinky   Brain
  ddddd   33333   444444          ZZ      Top

I need the rows in the second table to be matched to the first one, even when the second table has only the "base" values filled in. This is what I expect to get:

  field1  field2  field3     field4       field5  field6  Name    Description
  aaaaa   11111   qqqqq      aaaaaa       tttttt  yyyyyy  Pinky   Brain
  aaaaa   11111   cccccc     rrrrrrr                      Pinky   Brain
  aaaaa   11111   qqqqq      aaaaaa       tttttt  yyyyyy  Pinky   Brain
  aaaaa   11111   cccccc     rrrrrrr                      Pinky   Brain
  bbbbb   22222   rrrrrrrrr  iiiiiiiiiii  vvvvvv          ZZ      Top
  ccccc   22222   wwwww      ttttttttt                    Mickey  Mouse
  ddddd   33333   444444     5555555                      ZZ      Top

  | join type=left field1 field2... []

It makes sense that when I do a left join it looks for corresponding values in all fields, and if they are not there, I get no results. How can I solve it? Thanks
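A minimal sketch of the usual workaround: join only on the always-present base fields instead of all six. The subsearch source is a placeholder for wherever the second table lives, e.g. an uploaded lookup:

  | join type=left field1 field2
      [ | inputlookup second_table.csv
        | fields field1 field2 Name Description ]

Restricting the join keys to field1 and field2 means rows match on the base values alone, so the remaining columns of the first table no longer have to line up.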
Hi Team, I am stuck with a query that is not working. I have set up a summary index that collects data every 1 hour and every 15 min. I have a field 'isCostChanged' in the summary index whose 'Yes' and 'No' values I want to count. I am using this query:

  index=summary-my-sumdata splunk_server_group=default reporttype=costchangecount reporttime=fifteenmin isCostChanged=*
  | stats sum(count) as Total, sum(eval(isCostChanged="true")) as CostChanged, sum(eval(isCostChanged="false")) as NoCostChanged by CountryCode
  | eval CostChangeRatio=round((CostChanged/Total)*100,2)
  | eval NoCostChangeRatio=round((NoCostChanged/Total)*100,2)
  | fields CountryCode, NoCostChanged, CostChanged, CostChangeRatio

What it's doing: the Total count is correct, but the counts for isCostChanged=true and =false are not correct; they are too low. If I verify the data with the following, the count is correct:

  | stats sum(count) as Total by isCostChanged

Can you help me achieve this? Thanks in advance, Nishant
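For framing: each summary-index event already carries a count field, so a conditional sum normally has to weight by that field rather than sum the boolean comparison itself. A sketch of that idiom, using the field names from the post:

  | stats sum(count) as Total,
          sum(eval(if(isCostChanged="true", count, 0))) as CostChanged,
          sum(eval(if(isCostChanged="false", count, 0))) as NoCostChanged
    by CountryCode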
Hi, wondering if anyone can help. I am trying to create a new field called FS_Owner_Mail using |eval from the existing mail and FS_Owner fields, but I'm not too sure how to work it into the below search.

  index=varonis sourcetype=xxx:varonis:csv:reports
  | eval User_Group=replace(replace('User_Group',"xxxxl\\\\","")," ","")
  | join type=left User_Group
      [ search index=ad source=xxx_adgroupmemberscan memberSamAccountName="*_xxx" earliest=-48h
      | dedup groupSamAccountName, memberSamAccountName
      | rename groupSamAccountName as User_Group, memberSamAccountName as Member
      | join type=left Member
          [ search index=ad source="xxx_aduserscan" samAccountName="*_xxx"
          | dedup samAccountName
          | rename samAccountName as Member
          | table Member, displayName, mail]
      | stats values(Member) as Member, values(displayName) as DisplayName, values(mail) as Mail by User_Group
      | eval User_Group=replace(replace('User_Group',"_xxx","")," ","")]
  | table Access_Path, Current_Permissions, DisplayName, FS_Owner, Flags, Inherited_From_Folders, Mail, Member, User_Group
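For reference, the basic eval string-concatenation idiom that could be appended after the join, once FS_Owner and Mail land on the same result row (a sketch; the separator characters are arbitrary):

  | eval FS_Owner_Mail=FS_Owner." <".Mail.">"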
Hello,

We want to connect a Kafka machine with Splunk Connect for Kafka, but it's throwing the below error with the settings in config/connect-distributed.properties.

  [Worker clientId=connect-1, groupId=MySuperSystemID_group] Uncaught exception in herder work thread, exiting: (org.apache.kafka.connect.runtime.distributed.DistributedHerder:324)
  org.apache.kafka.connect.errors.ConnectException: Could not look up partition metadata for offset backing store topic in allotted period. This could indicate a connectivity issue, unavailable topic partitions, or if this is your first use of the topic it may have taken too long to create.
      at org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:184)
      at org.apache.kafka.connect.storage.KafkaOffsetBackingStore.start(KafkaOffsetBackingStore.java:145)
      at org.apache.kafka.connect.runtime.Worker.start(Worker.java:197)
      at org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:128)
      at org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:310)
      at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
      at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      at java.lang.Thread.run(Thread.java:748)

The configuration looks like this:

  group.id=MySuperSystemID_group
  bootstrap.servers=https://x.x.x.x:9093
  #key.converter=org.apache.kafka.connect.storage.StringConverter
  #value.converter=org.apache.kafka.connect.storage.StringConverter
  # SSL Configuration
  security.protocol=SSL
  ssl.client.auth=required
  # enable two way SSL
  ssl.endpoint.identification.algorithm=
  ssl.key.password=password
  ssl.keystore.location=/root/files/keystore.jks
  ssl.keystore.password=password
  ssl.keystore.type=JKS
  ssl.truststore.location=/root/files/truststore.jks
  ssl.truststore.password=password
  ssl.enabled.protocols=TLSv1.2,TLSv1.1
  ssl.truststore.type=JKS

Can someone please guide me here?
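One detail worth double-checking against the config above: Kafka's bootstrap.servers property takes plain host:port pairs, not URLs with a scheme; the wire protocol is selected by security.protocol. A sketch of the conventional form (the address is the placeholder from the post):

  bootstrap.servers=x.x.x.x:9093
  security.protocol=SSL

A scheme prefix can prevent the client from resolving the broker at all, which would be consistent with the "could not look up partition metadata" symptom.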
Hi, I have a dashboard which captures the count of search results in a token:

  <done><set token="count_results">$job.resultCount$</set></done>

I use that token inside the title of a panel:

  <title>$count_results$ Accounts</title>

Once I schedule the dashboard as a PDF, the title of that panel shows the literal text "$count_results$ Accounts". Do you have any workaround for this requirement, i.e. showing the count of results inside a panel title of a scheduled PDF dashboard? Thanks.
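A hedged sketch of a commonly cited workaround: token-driven titles are not re-evaluated during scheduled PDF rendering, but panel searches do run, so the count can be rendered as a single-value result instead of a title (the query is a placeholder):

  <panel>
    <single>
      <title>Accounts</title>
      <search>
        <query>index=... | stats count</query>
      </search>
    </single>
  </panel>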
Hello All,

1) I would like to add a radio button (or any other way to select) one of the results of my below REST query search.

QUERY:

  | rest /services/data/ui/views
  | table id label updated "eai:userName" "eai:data" "eai:appName"

2) This query is saved as a dashboard (auto-refresh) and I have added a few text boxes (User Name, Commit Branch, User Token) as shown in the attached image. These text boxes will be filled in manually by the user.

Use case: I need to choose any one row via a radio button (or any other technique) and then click on a SUBMIT button to send the selected row's data, plus the text box data entered by the user, to my custom Python script. What is the way to achieve this use case in Splunk? Any help on this is appreciated. Thanks!
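A minimal Simple XML sketch of the usual approximation of row selection: a table drilldown that stores the clicked row's values in tokens, which a submit action can then pass along (the token names are hypothetical):

  <table>
    <search>
      <query>| rest /services/data/ui/views | table id label updated "eai:userName" "eai:appName"</query>
    </search>
    <drilldown>
      <set token="selected_id">$row.id$</set>
      <set token="selected_label">$row.label$</set>
    </drilldown>
  </table>

Handing the selection to a custom Python script would then typically go through a custom alert action or a custom search command that reads these token values.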
Hello All, I have to load balance HTTPS requests over an indexer cluster. I need to know the best approach to load balance the data. Is NGINX the only solution?
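For illustration, a minimal sketch of what an NGINX layer in front of HEC could look like, assuming the indexers listen for HEC on port 8088 (hostnames, ports, and certificate paths are placeholders):

  # hypothetical nginx.conf fragment
  upstream splunk_hec {
      server idx1.example.com:8088;
      server idx2.example.com:8088;
  }
  server {
      listen 443 ssl;
      ssl_certificate     /etc/nginx/certs/hec.crt;
      ssl_certificate_key /etc/nginx/certs/hec.key;
      location /services/collector {
          proxy_pass https://splunk_hec;
      }
  }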
I have a data field categ_hierarchy in the format of a series of up to 8 category IDs joined by ">>". For example:

  categ_id1>>categ_id2>>categ_id3>>...>>categ_id8

Category ID 1 is required, but category IDs 2 through 8 are optional. Category IDs are strings without the '>' character in them. They might have whitespace that I want to trim from the beginning or end. Here are 2 examples:

  simulink/index>>simulink/simulink-environment>>simulink/programmatic-modeling
  support/parallel-221>>parallel-computing/index

I want to look up each category ID, get back the associated category name for each, and then reconstruct the category names into a similarly formatted path:

  categ_name1>>categ_name2>>categ_name3>>...>>categ_name8

Here are the two examples constructed from the lookup results:

  Simulink >> Simulink Environment Fundamentals >> Programmatic Model Editing
  Parallel Computing >> Parallel Computing Toolbox

Is there any way to simplify my SPL from this?

  | rex field=category_hierarchy "(?<categ_id1>[^>]+)(>>(?<categ_id2>[^>]+))?(>>(?<categ_id3>[^>]+))?(>>(?<categ_id4>[^>]+))?(>>(?<categ_id5>[^>]+))?(>>(?<categ_id6>[^>]+))?(>>(?<categ_id7>[^>]+))?(>>(?<categ_id8>[^>]+))?"
  | eval categ_id1=trim(categ_id1), categ_id2=trim(categ_id2), categ_id3=trim(categ_id3), categ_id4=trim(categ_id4), categ_id5=trim(categ_id5), categ_id6=trim(categ_id6), categ_id7=trim(categ_id7), categ_id8=trim(categ_id8)
  | lookup category_lookup category_id AS categ_id1 OUTPUTNEW category_name AS categ_name1
  | lookup category_lookup category_id AS categ_id2 OUTPUTNEW category_name AS categ_name2
  | lookup category_lookup category_id AS categ_id3 OUTPUTNEW category_name AS categ_name3
  | lookup category_lookup category_id AS categ_id4 OUTPUTNEW category_name AS categ_name4
  | lookup category_lookup category_id AS categ_id5 OUTPUTNEW category_name AS categ_name5
  | lookup category_lookup category_id AS categ_id6 OUTPUTNEW category_name AS categ_name6
  | lookup category_lookup category_id AS categ_id7 OUTPUTNEW category_name AS categ_name7
  | lookup category_lookup category_id AS categ_id8 OUTPUTNEW category_name AS categ_name8
  | eval category_name_hierarchy=categ_name1
  | eval category_name_hierarchy=if(isnull(categ_name2), category_name_hierarchy, category_name_hierarchy." >> ".categ_name2)
  | eval category_name_hierarchy=if(isnull(categ_name3), category_name_hierarchy, category_name_hierarchy." >> ".categ_name3)
  | eval category_name_hierarchy=if(isnull(categ_name4), category_name_hierarchy, category_name_hierarchy." >> ".categ_name4)
  | eval category_name_hierarchy=if(isnull(categ_name5), category_name_hierarchy, category_name_hierarchy." >> ".categ_name5)
  | eval category_name_hierarchy=if(isnull(categ_name6), category_name_hierarchy, category_name_hierarchy." >> ".categ_name6)
  | eval category_name_hierarchy=if(isnull(categ_name7), category_name_hierarchy, category_name_hierarchy." >> ".categ_name7)
  | eval category_name_hierarchy=if(isnull(categ_name8), category_name_hierarchy, category_name_hierarchy." >> ".categ_name8)
  | table category_hierarchy, category_name_hierarchy

I know I could split the category_hierarchy field by the ">>" delimiter, but I don't know how to look up each of the category IDs in the resulting multivalue field. Any help would be appreciated! Thanks, Rena
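A sketch of the split-and-expand approach hinted at in the question: split on the delimiter, expand to one row per category ID, look each one up once, then stitch the names back together. streamstats supplies a row key so rows can be regrouped; this assumes event order is preserved through the pipeline, which list() maintains:

  | streamstats count as row_id
  | eval categ_id=split(category_hierarchy, ">>")
  | mvexpand categ_id
  | eval categ_id=trim(categ_id)
  | lookup category_lookup category_id AS categ_id OUTPUTNEW category_name
  | stats first(category_hierarchy) as category_hierarchy, list(category_name) as categ_names by row_id
  | eval category_name_hierarchy=mvjoin(categ_names, " >> ")
  | table category_hierarchy, category_name_hierarchy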
Hi All, I have two dashboards, dashboard 1 and dashboard 2, and I have linked them. When clicking on a host in a line chart on dashboard 1, dashboard 2 opens up and filters on the selected host from dashboard 1. So far, dashboard 2 shows the correct host in the multiselect input. The issue is that I somehow override the panels in dashboard 2 with those of dashboard 1. It may be an issue with the token, since I have tok_host=$tok_host$ on both dashboards 1 and 2, but I am not sure if that is causing the issue. Any advice is welcome. Thanks in advance.

Dashboard 1 input for the multiselect:

  <input type="multiselect" token="tok_host" searchWhenChanged="true">
    <label>Select Server (Multi Select)</label>
    <search>
      <query>
        (index=test_*_idx) | fields + host | stats values(host) as host | mvexpand host | rename host as tok_host
      </query>
    </search>
    <prefix>host IN (</prefix>
    <valuePrefix></valuePrefix>
    <valueSuffix></valueSuffix>
    <delimiter> , </delimiter>
    <suffix>)</suffix>
    <choice value="*">All</choice>
    <fieldForLabel>tok_host</fieldForLabel>
    <fieldForValue>tok_host</fieldForValue>
  </input>

Code linking the dashboards:

  <drilldown>
    <link target="_blank">/app/XYZ_Sun_Sys/Dashboard2?form.tok_host=$click.name2$</link>
  </drilldown>

Dashboard 2 multiselect input code:

  <input type="multiselect" token="tok_host" searchWhenChanged="true">
    <label>Select Server</label>
    <search>
      <query>(index=text_idx) | fields + host | stats values(host) as host | mvexpand host | rename host as tok_host</query>
      <earliest>$time_range.earliest$</earliest>
      <latest>$time_range.latest$</latest>
    </search>
    <fieldForLabel>Select Host</fieldForLabel>
    <fieldForValue>tok_host</fieldForValue>
    <choice value="*">All</choice>
    <delimiter> ,</delimiter>
    <default>*</default>
  </input>

Dashboard 2 code for the panel:

  </fieldset>
  <row>
    <panel depends="$tok_host$">
      <title> First Panel - $tok_host$ </title>
      <single>
        <title>Space Avail</title>
        <search>
          <query>index=testing_idx host=$tok_host$ | timechart span=10min avg(Speed) as speed | eval change=_time</query>
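One inconsistency visible in the snippets above, offered as a hedged observation: dashboard 1's multiselect emits a complete host IN (...) clause via prefix/suffix, while dashboard 2's input has no prefix/suffix yet is consumed as host=$tok_host$, which breaks as soon as more than one value is selected. A sketch of dashboard 2's input aligned with dashboard 1's pattern (the panel query would then use $tok_host$ bare instead of host=$tok_host$):

  <input type="multiselect" token="tok_host" searchWhenChanged="true">
    <label>Select Server</label>
    <prefix>host IN (</prefix>
    <delimiter>, </delimiter>
    <suffix>)</suffix>
    <choice value="*">All</choice>
    <default>*</default>
    <fieldForLabel>tok_host</fieldForLabel>
    <fieldForValue>tok_host</fieldForValue>
  </input>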
We use the Splunk ServiceNow TA, both for collecting data from ServiceNow and for creating incidents via the Splunk alert action. We have a use case on the collection side. Within inputs.conf there is an attribute available called filter_data. This allows you to filter the data you do or do not wish to collect from ServiceNow. The specific use case is that we do NOT want to collect events from the sys_audit table where sys_created_by=user.system. The basic stanza in inputs.conf within the Snow TA is this:

  [snow://sys_audit]
  filter_data = sys_created_by!=user.system
  table = sys_audit

This approach does not filter on sys_created_by; that is, we still see user.system as sys_created_by in our events. Is there anything I'm doing wrong? Thx.
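A quick verification sketch for whether the filter takes effect on the ingest side; the index and sourcetype here are assumptions about how the TA's sys_audit input is configured locally:

  index=snow sourcetype=snow:sys_audit earliest=-24h
  | stats count by sys_created_by
  | sort - count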
I've got some queries I need to run periodically that use the exact same base search, one with the weekly uniques and one with the average daily uniques. I can do these separately:

  (search) | stats dc(thing) as WeeklyCount

and

  (search) | bucket _time span=day | stats dc(thing) as DailyCount by _time | stats avg(DailyCount)

I've tried variations on appendpipe but can't get it to work. Example:

  (search)
  | stats dc(thing) as WeeklyCount
  | appendpipe [ bucket _time span=day | stats dc(thing) as DailyCount by _time | stats avg(DailyCount)]

This returns only WeeklyCount. If I switch the order and put WeeklyCount in the appendpipe, it gives me the correct daily average, but WeeklyCount reports as 0.
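For framing: once the first stats runs, the raw events (and their _time values) are gone, which is why the appendpipe variants come up empty. A single-pass sketch that keeps both numbers by computing the weekly distinct count with eventstats before the events are collapsed into daily buckets:

  (search)
  | bin _time span=1d
  | eventstats dc(thing) as WeeklyCount
  | stats dc(thing) as DailyCount, first(WeeklyCount) as WeeklyCount by _time
  | stats avg(DailyCount) as AvgDailyCount, first(WeeklyCount) as WeeklyCount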
I have a field with a filename referring to a .tgz file. I need to check whether a particular file, for example XYZ, exists inside this .tgz file. How can I do this? Thanks in advance.
I have 2 types of search messages:

  Problem #1
  Problem #5

and the other one goes like this:

  Solved problem_id successful: 1
  Solved problem_id successful: 2
  Solved problem_id successful: 3

I want to return problems which have not been solved yet, so in the above case it should return only 5. What I tried:

Search 1, extracting all problem IDs (works fine):

  index="production" "Problem #" earliest=-3h latest=-1h
  | rex field=message ".*Problem #(?<problem_id>.*):.*"
  | stats count by problem_id
  | table problem_id

Search 2, extracting all problem IDs which have been solved (works fine):

  index="production" "Solved problem_id successful:" earliest=-3h
  | rex field=message ".*Solved problem_id successful: (?<problem_id>.*)"
  | stats count by problem_id
  | table problem_id

Now, to find problems which are not solved: search1 | search NOT [search2]

  index="production" "Problem #" earliest=-3h latest=-1h
  | rex field=message ".*Problem #(?<problem_id>.*):.*"
  | stats count by problem_id
  | table problem_id
  | search NOT [search index="production" "Solved problem_id successful: " earliest=-3h
      | rex field=message ".*Solved problem_id successful: (?<problem_id>.*)"
      | stats count by problem_id
      | table problem_id ]

The above query doesn't work and just returns the result of search 1; the NOT seems to have no effect. Thanks in advance. PS: I have modified the queries a little on the fly to remove sensitive info.
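A single-search sketch that sidesteps the NOT-subsearch entirely: pull both message types, flag each event, then keep problem IDs that never appear as solved. The regexes are simplified to numeric IDs for illustration:

  index="production" ("Problem #" OR "Solved problem_id successful:") earliest=-3h
  | rex field=message "Problem #(?<problem_id>\d+)"
  | rex field=message "Solved problem_id successful: (?<solved_id>\d+)"
  | eval problem_id=coalesce(problem_id, solved_id)
  | eval solved_flag=if(isnotnull(solved_id), 1, 0)
  | stats sum(solved_flag) as solved_events by problem_id
  | where solved_events=0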
I have two dropdowns, and I only want one dropdown at a time to drive the search. The Closed dropdown has token value field1; the Open dropdown has token value field4. In the dashboard's source I have the query below, so that the panel picks up the data per the input provided in the dropdowns:

  index=* | lookup bar.csv IP OUTPUT BAR BAR_Status | search BAR="$field1$" OR BAR="$field4$" | chart count(IP) by BAR, STATUS

So if I select from the Closed dropdown, the chart should provide only the details related to Closed, and vice versa. Please help.
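A Simple XML sketch of one way to make the dropdowns mutually exclusive: let each dropdown's change handler write a shared filter token, so whichever dropdown was changed last wins. The bar_filter token and the labels are hypothetical; field1/field4 come from the post:

  <input type="dropdown" token="field1">
    <label>Closed</label>
    <change>
      <set token="bar_filter">BAR="$value$"</set>
    </change>
  </input>
  <input type="dropdown" token="field4">
    <label>Open</label>
    <change>
      <set token="bar_filter">BAR="$value$"</set>
    </change>
  </input>

The panel search would then use the shared token in place of the OR clause:

  index=* | lookup bar.csv IP OUTPUT BAR BAR_Status | search $bar_filter$ | chart count(IP) by BAR, STATUS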
I have read on Splunk.com that Splunk Enterprise reports don't satisfy the same use cases as the ones in Enterprise Security (ES), and that they should not be copied or synced to ES. Please tell me why. Thanks a million.