All Topics
We have 2 separate search clusters fed by the same indexer cluster. The reason for having two search clusters was to host 2 sets of apps/add-ons that serve 2 different functional groups. Now we are thinking about consolidating into one big search cluster. We'd like to migrate all apps/add-ons, reports, dashboards, etc. from the 2 old clusters to the new one. Both old clusters use the same LDAP server. Has anybody had a similar experience?
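Not an answer from experience, but a hedged sketch of one possible approach, assuming the new cluster is a search head cluster managed by a deployer (all hostnames, paths, and credentials below are placeholders): stage the apps from both old deployers on the new cluster's deployer, then push them as one merged bundle.

```shell
# Hypothetical sketch: merge the app folders from both old deployers onto the
# new cluster's deployer, then push the combined bundle to the new members.
scp -r old-deployer-1:/opt/splunk/etc/shcluster/apps/* /opt/splunk/etc/shcluster/apps/
scp -r old-deployer-2:/opt/splunk/etc/shcluster/apps/* /opt/splunk/etc/shcluster/apps/

# Run on the new deployer; target any one member of the new cluster.
/opt/splunk/bin/splunk apply shcluster-bundle -target https://new-sh1:8089 -auth admin:changeme
```

Private user objects (reports and dashboards under etc/users) are not covered by the deployer bundle and would need to be copied or migrated separately.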
I am trying to extract key-value pairs from JSON events using the rex command:

```
mysearch | rex field=_raw max_match=0 "\"(?<Key>\b\w+[^\":]*)\":(?!\s*{\[)\"*(?<Value>(?!\[{|{|\[)[^(,|}|\")]*)"
```

I have a single-column CSV lookup with all the key names I am interested in:

```
| inputlookup my_fields_json.csv | fields FieldName
```

Is there a way to use the lookup to make the regular expression in my rex command dynamic, so I only extract the fields I am interested in?
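One hedged approach, as a sketch: rather than rewriting the regex itself, extract all pairs first and keep only the keys listed in the lookup via a subsearch. Field names follow the question; the rename is an assumption so the subsearch emits a filter on Key.

```
mysearch
| rex field=_raw max_match=0 "\"(?<Key>\b\w+[^\":]*)\":(?!\s*{\[)\"*(?<Value>(?!\[{|{|\[)[^(,|}|\")]*)"
| eval kv=mvzip(Key, Value, "=")
| mvexpand kv
| eval Key=mvindex(split(kv, "="), 0), Value=mvindex(split(kv, "="), 1)
| search [| inputlookup my_fields_json.csv | rename FieldName as Key | fields Key]
```

The subsearch expands to (Key="a" OR Key="b" ...), so only the wanted pairs survive; note the simple split would misbehave if a Value itself contains "=".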
I'm new to data models and have a very newbie question. We are using Splunk Cloud, and when I try to add an auto-extracted field to the dataset, I only see a partial list of fields. How do I scroll down or go to the next page when adding fields in the "Add Auto-Extracted Field" window?
Hello, I'm starting this post with a shoutout to @niketn because I used one of his posts as the basis for my dashboard. However, input from any/all Splunkers out there is appreciated. Here is my code, modified from niketnilay's post "How to show table result in one page / table modification":

```xml
<row>
  <panel>
    <title>Show Table Results in One Page</title>
    <table>
      <search>
        <query>index=oit_printer_monitoring AND type=Printer
| eval timeConvDate=strftime(_time,"%a %m-%d-%Y")
| eval timeConvTime=strftime(_time,"%H:%M:%S")
| eval statusNum=case(status="printing,deleting,error",4,status="error,toner low",4,status="printing,error",4,status="paper jam",4,status="no toner",4,status="error,offline",4,status="error",4,status="door open,error",3,status="spooling,paused",3,status="paused",3,status="out of paper",3,status="error,out of paper",3,status="offline",3,status="door open",3,status="toner low",2,status="restarted",2,status="printing,deleting",1,status="printed,deleting",1,status="printing,printed,deleting",1,status="error,warming up",1,status="spooling,printing",1,status="warming up",1,status="spooling",1,status="printing",1,status="normal",1)
| sort - statusNum, status
| fields printer, status, statusNum, timeConvDate, timeConvTime
| dedup printer
| eval printer="#".printer
| eval component=mvzip(printer,mvzip(status,mvzip(timeConvDate,timeConvTime,"..."),"..."),"...")
| stats values(component) as component
| nomv component
| eval component="<div>".replace(component,"#","</div><div>")."</div>"
| makemv delim="..." component
| table component</query>
...
```

Explanation. Create two new fields for time and date:

```
| eval timeConvDate=strftime(_time,"%a %m-%d-%Y")
| eval timeConvTime=strftime(_time,"%H:%M:%S")
```

Create a new field to represent the status as a number that can be sorted later:

```
| eval statusNum=case(status="printing,deleting,error",4,status="error,toner low",4,...)
```

Append a "#" in front of my first field, printer, to establish the beginning of a new field, component:

```
| eval printer="#".printer
```

The field component is a concatenation of several other fields: printer, status, and the timeConvDate and timeConvTime fields as one. I used "..." instead of "|" between each of the concatenated fields:

```
| eval component=mvzip(printer,mvzip(status,mvzip(timeConvDate,timeConvTime,"..."),"..."),"...")
```

The next two lines are from niketnilay's code, unchanged:

```
| stats values(component) as component
| nomv component
```

I modified niketnilay's eval/replace code as follows; by using the "..." all 4 fields stay associated with each other, and the "#" indicates the start of the field component:

```
| eval component="<div>".replace(component,"#","</div><div>")."</div>"
```

Then I replaced the "..." within the field component with a carriage return/newline:

```
| makemv delim="..." component
```

Before continuing, I have a question about the field named component. When I substituted component with my field name, printerResult, it did not work (I had substituted it in the JS code as well), so I used component instead. My question is: in the JS code, the word Components is used (plural and uppercase C). Is this a special variable name in JS, and is that why it was used?

I am using display_token_with_html_content.js as written:

```javascript
require([
    "jquery",
    "splunkjs/mvc",
    "splunkjs/mvc/simplexml/ready!"
], function ($, mvc) {
    var defaultTokenModel = mvc.Components.get("default");
    defaultTokenModel.on("change:tokResultsInHTML", function (model, tokResultsInHTML, options) {
        if (tokResultsInHTML !== undefined) {
            $("#htmlTokenContainer").html(tokResultsInHTML);
        }
    });
});
```

I used niketnilay's CSS in a panel:

```xml
<panel>
  <title>Applying HTML Style</title>
  <html>
    <style>
      #htmlTokenContainer{
        display:flex;
        flex-wrap:wrap;
      }
      #htmlTokenContainer div{
        width: 260px;
      }
    </style>
    $tokResultsInHTML$
  </html>
...
```

Lastly, I created another panel with just the token value assigned earlier, per the niketnilay example:

```xml
<panel>
  <title>Format Data as desired output and show as html</title>
  <html>
    <div id="htmlTokenContainer">
    </div>
  </html>
...
```

Here is a screenshot of my results. What I am trying to accomplish is placing each value of component into individual cells, with the formatting all centered. As the final requirement, clicking the printer name in any one of the cells would set a token, which displays another panel (depends="token") and populates that panel with data specific to the printer clicked.

Thanks and God bless,
Genesius
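A side note for readers following the mvzip nesting above: a minimal, self-contained illustration (the sample values are made up) of how the three nested mvzip calls glue the four fields into one "..."-delimited string that makemv can later split apart.

```
| makeresults
| eval printer="#HP-4015", status="normal", timeConvDate="Wed 02-05-2020", timeConvTime="12:00:00"
| eval component=mvzip(printer, mvzip(status, mvzip(timeConvDate, timeConvTime, "..."), "..."), "...")
| table component
```

This yields component="#HP-4015...normal...Wed 02-05-2020...12:00:00", which makemv delim="..." then turns back into four separate values.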
My company's Splunk instance is located on Splunk Cloud, and authentication to Splunk Cloud is via an ADFS federation server. I have downloaded the Splunk PowerShell Resource Kit and PowerShell search cmdlets from Splunk. Using the documentation provided, I am trying to connect to Splunk Cloud and run a query with the following commands:

```powershell
$a = Get-Credential
Search-Splunk -Credential $a -Host company.splunkcloud.com -SearchString 'search stuff'
```

I do not have a native Splunk Cloud account; when using the Splunk Cloud website, I am first redirected to my company's ADFS server for authentication before I can interact with the site. So I am not sure how to authenticate to Splunk Cloud in order to use the PowerShell cmdlets. Any help with this would be most appreciated. Thanks, Joel
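A hedged workaround sketch, assuming your Splunk Cloud admin can issue you a Splunk authentication token and that the REST management port is reachable from your network (Splunk Cloud often requires a support request to open it; the URL, port, and token below are placeholders): you can bypass the SAML/ADFS browser flow by calling the REST search endpoint with a bearer token instead of using the Resource Kit cmdlets.

```powershell
# Hypothetical sketch: run a search over the Splunk REST API with a bearer token.
$token   = "<splunk-auth-token>"              # issued in Splunk Web under token management
$headers = @{ Authorization = "Bearer $token" }

Invoke-RestMethod -Method Post `
    -Uri "https://company.splunkcloud.com:8089/services/search/jobs/export" `
    -Headers $headers `
    -Body @{ search = "search stuff"; output_mode = "json" }
```

The hashtable body is sent form-encoded, which is what the endpoint expects; results stream back as JSON without any interactive login.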
Hi, is there a way to trace the origin of a specific value in Splunk? Currently I am trying to figure out which eventtype, lookup, or eval is setting a tag and a field value for some events in Splunk. I used btool to figure out whether there are some evals, but they do not apply. I found some lookups, but they do not contain the value I am looking for. A code trace or data lineage function would be very helpful sometimes. Does anyone know a function in Splunk or an app for this? Thank you.
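There is no built-in lineage tracer that I know of, but as a hedged sketch, you can widen the btool hunt across every object type that can stamp a tag or field (run on the search head; the grep target is a placeholder for your value):

```shell
# Check every layer that can produce a tag or a calculated/looked-up field,
# with --debug so btool prints which .conf file each line comes from.
$SPLUNK_HOME/bin/splunk btool eventtypes list --debug | grep -i "<your value>"
$SPLUNK_HOME/bin/splunk btool tags       list --debug | grep -i "<your value>"
$SPLUNK_HOME/bin/splunk btool props      list --debug | grep -iE "EVAL-|LOOKUP-|REPORT-"
$SPLUNK_HOME/bin/splunk btool transforms list --debug | grep -i "<your value>"
```

The --debug column shows the file path, which tells you the app and layer the setting comes from; remember tags are bound to eventtypes, so a tag hit usually leads back to an eventtypes.conf stanza.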
I would like to keep this layout regardless of screen size, but when I go to a smaller screen the panels rearrange themselves. How can I fix the layout so it does not change on different screens? TIA
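A hedged sketch of one common workaround: pin a minimum width on the dashboard via CSS injected from an HTML panel, so panels scroll horizontally instead of reflowing. The 1200px value and the selectors are assumptions to adapt to your dashboard.

```xml
<row>
  <panel>
    <html>
      <style>
        /* keep rows from wrapping on narrow screens; tune the width to taste */
        .dashboard-body, .dashboard-row {
          min-width: 1200px;
        }
      </style>
    </html>
  </panel>
</row>
```

With a fixed min-width the browser shows a horizontal scrollbar on small screens rather than stacking the panels.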
I've heard that using AWS Lambda is a great way to get high volumes of data directly into Splunk without the overhead of managing hardware. It seems like a great solution; can you provide an overview to help me get started?
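Not official guidance, but a minimal sketch of the usual pattern, assuming you have an HTTP Event Collector (HEC) endpoint and token on the Splunk side (HEC_URL and HEC_TOKEN are hypothetical environment variables): the Lambda function simply POSTs each invocation's payload to HEC.

```python
# Minimal Lambda -> Splunk HEC sketch (Python runtime; urllib3 ships with it).
import json
import os
import urllib3

http = urllib3.PoolManager()

def handler(event, context):
    # Wrap the incoming event in the HEC envelope.
    payload = {"event": event, "sourcetype": "aws:lambda"}
    resp = http.request(
        "POST",
        os.environ["HEC_URL"],  # e.g. https://<host>:8088/services/collector/event
        body=json.dumps(payload),
        headers={"Authorization": "Splunk " + os.environ["HEC_TOKEN"]},
    )
    return {"statusCode": resp.status}
```

Splunk's published Lambda blueprints add batching and retry handling on top of this; the sketch omits them for brevity.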
Hi @meenakande, if you have Splunk Cloud, you surely have a Heavy Forwarder to concentrate logs before sending them to Splunk Cloud, so you can use it to receive the syslog from VMware. If you don't have a Heavy Forwarder (usually there are at least two), I suggest reviewing your architecture! Ciao. Giuseppe
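To make this concrete, a hedged sketch of a syslog input on the Heavy Forwarder (the port, index, and sourcetype are assumptions; ESXi would be pointed at this host as its remote syslog target):

```
# inputs.conf on the Heavy Forwarder
[udp://514]
sourcetype = vmware:esxlog
index = vmware
no_appending_timestamp = true
```

For production volumes, a dedicated syslog server (e.g. syslog-ng) writing to files that the forwarder monitors is generally preferred over a direct UDP input, since it survives forwarder restarts without dropping packets.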
Guys, I have one old Splunk 6.1.4 instance that I cannot decommission for a while (won't go into details). I was looking at customizing the field colors of some of my Simple XML tables using the table examples here: https://splunkbase.splunk.com/app/1603/ but those examples only work for single-value fields, and mine are multi-value. Does anyone have an example for multi-value fields? I'm a JavaScript noob!
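Not from that app, but a hedged sketch of a multi-value-aware cell renderer using the same SplunkJS stack those examples are built on (the table id "myTable", the target field "status", and the color rule are all assumptions to adapt):

```javascript
// Hypothetical sketch: color each value of a multi-value cell individually.
require([
    'underscore',
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!'
], function(_, $, mvc, TableView) {
    var MVColorRenderer = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            return cell.field === 'status';   // field to colorize (assumption)
        },
        render: function($td, cell) {
            // Multi-value cells arrive as an array; single values as a string.
            var values = _.isArray(cell.value) ? cell.value : [cell.value];
            _.each(values, function(v) {
                var color = (v === 'error') ? 'red' : 'green';
                $td.append($('<div>').text(v).css('color', color));
            });
        }
    });
    mvc.Components.get('myTable').getVisualization(function(tableView) {
        tableView.table.addCellRenderer(new MVColorRenderer());
        tableView.table.render();
    });
});
```

The key difference from the single-value examples is the _.isArray check: multi-value fields reach the renderer as arrays, so each value gets its own styled div.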
In my use case, the source log (tailed by a monitor input stanza) is archived once a day at midnight; the resulting archive file is tailed by the same input stanza, and the original source log is deleted. What I noticed is: if the Splunk instance monitoring that source goes down while new events are still being written to the source log, and the instance comes back up only after the original file has been archived and the source log deleted, then the ArchiveProcessor does not verify whether the archive contains any new unread events that the TailReader could not read (because the instance was down at the time). Please check the following example:

```
02-05-2020 12:53:00.442 +0000 INFO ArchiveProcessor - Handling file=/etc/ArchiveFolder/sourcelog5.log.gz
02-05-2020 12:53:00.443 +0000 INFO ArchiveProcessor - reading path=/etc/ArchiveFolder/sourcelog5.log.gz (seek=0 len=784)
02-05-2020 12:53:00.499 +0000 INFO ArchiveProcessor - Archive with path="/etc/ArchiveFolder/sourcelog5.log.gz" was already indexed as a non-archive, skipping.
02-05-2020 12:53:00.499 +0000 INFO ArchiveProcessor - Finished processing file '/etc/ArchiveFolder/sourcelog5.log.gz', removing from stats
02-05-2020 13:01:31.503 +0000 INFO WatchedFile - Will begin reading at offset=12392 for file='/etc/ArchiveFolder/sourcelog5.log.gz'.
```

Based on the documentation (https://docs.splunk.com/Documentation/Splunk/8.0.2/Data/Howlogfilerotationishandled) I would understand that the tailing and archiving processors should behave the same, but apparently that is not the case here. I also did the complementary test: I extracted the source log from the archive again, and at that point the tailing processor realizes that there are still some new unread events and starts ingesting them. Why is the ArchiveProcessor missing those new unread events?
We have Splunk Cloud in our environment. How do I set up VMware logs to forward to Splunk Cloud without installing a Splunk universal forwarder on the ESXi server?
Hi, this might appear to be a silly question, especially because I believe I know the answer... The scenario is that I have a JVM tier instrumented with a Java Agent. This tier has some BTs that I do not want to instrument. Some of these generate a lot of traffic even though they're not of interest for my business monitoring. So I go into the Controller and exclude these BTs. Now my question is: will the JVM agent know that I have excluded these BTs and stop reporting metrics for these transactions? Or will it continue to send the "excluded" metrics and leave it to the Controller to filter out these BTs?
I want to know: what is the price for the Splunk App for VMware after the 60-day free trial? I checked Splunkbase and this is the answer: "The Splunk App for VMware is available for a 60-day free trial. If you wish to use the Splunk App for VMware beyond the trial, please contact your Splunk sales person."
I am trying to find the wget link for the Barracuda WAF/ADC Add-on for Splunk, but am not able to find it for the add-on below: https://splunkbase.splunk.com/app/3776/
I am currently working on a dashboard that will simplify searching for some of our office personnel who are not Splunk savvy. It has three dropdowns and six text inputs to ensure they get the specific results they are looking for; however, I am not sure if the code is working, or why the submit button is not working. I am very new to XML and any help would be appreciated. My code:

```xml
Splunk made simple
<input type="time" token="token_time">
  <label>Date and Time</label>
  <default>
    <earliest>@d</earliest>
    <latest>now</latest>
  </default>
</input>
<input type="dropdown" token="source_token">
  <label>Select Source</label>
  <choice value="calories-3514">choclates</choice>
  <choice value="calories-1445">veggies</choice>
  <choice value="calories-4461">cakes</choice>
  <choice value="calories-5447">pies</choice>
</input>
<input type="dropdown" token="food_token">
  <label>Food</label>
  <choice value="*">All</choice>
  <fieldForLabel>Food</fieldForLabel>
  <fieldForValue>food_name</fieldForValue>
  <search>
    <query>index=goodies source=$source_token$ | stats count by food_name</query>
    <earliest>@d</earliest>
    <latest>now</latest>
  </search>
</input>
<input type="text" token="user_tok">
  <label>Who</label>
  <default>$user_tok$</default>
</input>
<input type="text" token="uid_tok">
  <label>Trainer</label>
  <default>$uid_tok$</default>
</input>
<input type="text" token="src_tok">
  <label>First meal</label>
  <default>$src_tok$</default>
</input>
<input type="text" token="dst_tok">
  <label>Second Meal</label>
  <default>$dst_tok$</default>
</input>
<input type="text" token="junkfood_tok">
  <label>third meal</label>
  <default>$junkfood_tok$</default>
</input>
<input type="text" token="snack_tok">
  <label>Snack</label>
  <default>$snack_tok$</default>
</input>
<panel>
  <title>Panel 1</title>
  <table>
    <search>
      <query>index=goodies source=$token_time$
| fillnull value=NULL food_name, user, uid, src, dst, junkfood, snack
| table food_name, user, uid, src, dst, junkfood, snack
| $time_token.earliest$ $time_token.latest$ 10 none none true
```
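Not a fix for the exact code above, but a hedged, stripped-down sketch of the structure a working submit-button form needs (everything here is illustrative, not your full dashboard): a <form> root, a <fieldset submitButton="true">, consistent token names (the original mixes token_time and time_token), the time token applied via <earliest>/<latest> rather than inside the query, and text-input defaults as literals rather than self-referencing tokens like <default>$user_tok$</default>.

```xml
<form>
  <label>Splunk made simple</label>
  <fieldset submitButton="true" autoRun="false">
    <input type="time" token="token_time">
      <label>Date and Time</label>
      <default>
        <earliest>@d</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="source_token">
      <label>Select Source</label>
      <choice value="calories-3514">choclates</choice>
    </input>
    <input type="text" token="user_tok">
      <label>Who</label>
      <default>*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Panel 1</title>
      <table>
        <search>
          <query>index=goodies source=$source_token$ user=$user_tok$
| fillnull value=NULL food_name, user, uid, src, dst, junkfood, snack
| table food_name, user, uid, src, dst, junkfood, snack</query>
          <earliest>$token_time.earliest$</earliest>
          <latest>$token_time.latest$</latest>
        </search>
        <option name="count">10</option>
      </table>
    </panel>
  </row>
</form>
```

With submitButton="true" and autoRun="false", the panels wait for the green Submit button instead of re-running on every input change.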
I got an alert that some of an index's buckets have been frozen due to size. How do I get the bucket details, i.e. the dates of the buckets that got frozen?
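A hedged sketch for digging the details out of the internal logs (the component and message text are assumptions about how your version logs freezes): bucket names encode the latest/earliest event times as epoch seconds in the form db_<latest>_<earliest>_<id>, so you can decode the date range directly.

```
index=_internal sourcetype=splunkd component=BucketMover "freeze"
| rex "(?<bucket>db_(?<latest>\d+)_(?<earliest>\d+)_\d+)"
| eval earliest_time=strftime(earliest, "%F %T"), latest_time=strftime(latest, "%F %T")
| table _time, bucket, earliest_time, latest_time
```

The strftime calls convert the epoch fields from the bucket name into readable timestamps, giving the event date range each frozen bucket covered.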
Hi All, I am looking for a way to display the events which appeared before a particular error is written into the log files (for that particular error there is a configured alert). This is the alert:

```
index=**** message="Interface Broker Configuration Service error: No result retrieved from config service"
```

First I tried with localize but was not able to get a useful result, and I don't know exactly how to filter for the events written to the same log 5 or 10 seconds before the error message occurs. Another option was the combination of eval + map, based on other questions posted on the Splunk community, but Splunk was not able to return a value:

```
| eval starttime=_time-180
| eval endtime=_time+1
| map search="search index=* earliest=$starttime$ latest=$endtime$"
```

Do you have any suggestions? Cheers, Konstantin
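A hedged sketch building on the map attempt above: anchor the outer search on the alert's error events, compute a time window per event, and let map pull the surrounding events. The 10-second window and maxsearches value are illustrative; note that map's default maxsearches is low, so raising it explicitly matters when there are several error events.

```
index=**** message="Interface Broker Configuration Service error: No result retrieved from config service"
| eval starttime=_time-10, endtime=_time
| map maxsearches=20 search="search index=**** earliest=$starttime$ latest=$endtime$"
```

One common reason the original attempt returned nothing is that map runs one subsearch per outer result; if the outer search matched no error events in the selected time range, map has nothing to expand.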
Will the disk size defined at index creation be allocated all at once, or is it allocated based on index utilization?