All Posts

You can only send HEC (or S2S embedded in HTTP) to your Cloud HEC inputs. So in order to ingest syslog you need something in place on-premises to receive the syslog traffic and push it onward as something that Cloud will accept. That can be a UF, as @gcusello suggested, or an SC4S or a properly configured rsyslog/syslog-ng instance with HTTP output.
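For illustration, here is a minimal sketch of a syslog-ng destination along the lines of the last option. The hostname, token, and the s_network source are placeholders, not your actual values; check the syslog-ng http() driver documentation before relying on this:

```
# Hedged sketch: forward received syslog to a Splunk Cloud HEC endpoint.
# URL, token, and the s_network source are placeholders for this example.
destination d_splunk_hec {
    http(
        url("https://http-inputs-example.splunkcloud.com:443/services/collector/event")
        method("POST")
        headers("Authorization: Splunk <your-hec-token>")
        body('{"event": "$MESSAGE", "host": "$HOST", "sourcetype": "syslog"}')
    );
};

log {
    source(s_network);          # an existing udp()/tcp() syslog source
    destination(d_splunk_hec);
};
```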
While I wholeheartedly agree with the "don't use regex for structured data" advice, it's worth noting that sometimes it's not easy to extract the structured part from the whole event.
I am running a search in JavaScript that returns results similar to this one.

   new SearchManager({
       id: "my_search",
       results: true,
       search: `
           | makeresults count=10
           | streamstats count
           | fields - _time
       `
   });

What I would like to obtain is a JS array with the resulting vector in a variable. I tried to solve it like so:

   let search = mvc.Components.get("my_search");
   let results = search.data("results");
   results_outside = results.on("data", function(){ // 1b)
       let rows = results.data().rows;
       let array = rows.flat(1); // I want the flattened array, not a nested one
       console.log("array: ", array);
       tokens.set("arrays", array); // 2)
       return array; // 1a)
   });
   console.log("results_outside: ", results_outside);

The `array` variable within the function has the desired results, as I can tell from the console. However, exporting it to the global scope works neither by:

1) storing it in `results_outside` - this ends up with the same value as `results`.

nor by

2) setting it to a token.
OK. Assuming that:
1. You always have a drive letter at the beginning
2. You don't have "empty parts" (consecutive backslashes are syntactically correct when specifying a file path, but are typically not returned as the path to an existing file)
3. You want to extract the part after the first four components
The regex to do so would look like this:
[a-zA-Z]:\\\\([^\\]+\\){4}(?<remainder>.*)
The "remainder" capture group will capture the path after the first four directories. Of course, if you want to do it with the "rex" command in Splunk, you need to escape all backslashes, which turns it into something like this:
| rex  "[a-zA-Z]:\\\\\\\\([^\\\\]+\\\\){4}(?<remainder>.*)"
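The same idea can be sanity-checked outside Splunk. Here is a small Python sketch; note that Python spells named groups as (?P<name>...), and this version assumes plain single-backslash paths in the raw text (the sample path is made up for the example):

```python
import re

# "Skip the first four path components after the drive letter,
# capture the rest" - a Python adaptation of the answer's regex.
pattern = re.compile(r"[a-zA-Z]:\\([^\\]+\\){4}(?P<remainder>.*)")

path = r"C:\Users\alice\AppData\Local\Temp\report.txt"
match = pattern.match(path)
if match:
    # Everything after the first four directories
    print(match.group("remainder"))  # Temp\report.txt
```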
I have a "Product Brand" multiselect filter in a Splunk dashboard. It is a dynamic filter rather than a static one. I also have a panel displaying all product brands. Now, I want another conditional panel to display further information on 3 of the brands in the product brand list if the user selects any of these 3. I know I have to set <change> and <condition> tags in the XML to toggle the display of the panel and store the selected values. I now have three condition tags with set token like this:

<change>
  <condition match="A">
    <set token="show_product_panel">true</set>
    <set token="show_product">$value$</set>
  </condition>
  <condition value="B">
    <set token="show_product_panel">true</set>
    <set token="show_product">$value$</set>
  </condition>
  <condition value="C">
    <set token="show_product_panel">true</set>
    <set token="show_product">$value$</set>
  </condition>
  <condition>
    <unset token="show_product_panel"></unset>
    <unset token="show_product"></unset>
  </condition>
</change>

However, I want $show_product$ to hold multiple values instead of one, as it is a multiselect filter. How should I do this? I have tried something like the following in each condition, but it won't work. How can I "append" the values into $show_product$? Thanks.
<eval token="show_product">if(isnull($show_product$), $value$, $show_product$.", ".$value$)</eval>

FYI: the $show_product$ will be passed into the conditional panel like this

<row depends="$show_product_panel$">
  <panel>
    <chart>
      <search>
        <query>index IN ("A_a", "A_b")
| where match(index, "A_" + $subsidiary$)
| dedup id sortby _time
| eval "Product Brand" = coalesce('someFieldA', 'someFieldB')
| search "Product Brand" IN ($show_product$)
| timechart span=1mon count by "Product Brand"</query>
        <earliest>$field1.earliest$</earliest>
        <latest>$field1.latest$</latest>
      </search>
      <option name="charting.chart">column</option>
      <option name="charting.drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </chart>
  </panel>
</row>

FYI: Product Brand XML code snippet:

<input type="multiselect" token="product_brand" searchWhenChanged="true">
  <label>Product Brand</label>
  <fieldForLabel>brand_combine</fieldForLabel>
  <fieldForValue>brand_combine</fieldForValue>
  <search>
    <query>index IN ("A","B")
| eval brand_combine = coalesce('someFieldA','someFieldB')
| search brand_combine != null
| where match(index, "zendesk_ticket_" + $subsidiary$)
| dedup brand_combine
| fields brand_combine</query>
    <earliest>0</earliest>
    <latest></latest>
  </search>
  <choice value="*">All</choice>
  <default>*</default>
  <initialValue>*</initialValue>
  <delimiter>,</delimiter>
  <change>
    <condition match="A">
      <set token="show_product_panel">true</set>
      <set token="show_product">$value$</set>
    </condition>
    <condition value="B">
      <set token="show_product_panel">true</set>
      <set token="show_product">$value$</set>
    </condition>
    <condition value="C">
      <set token="show_product_panel">true</set>
      <set token="show_product">$value$</set>
    </condition>
    <condition>
      <unset token="show_product_panel"></unset>
      <unset token="show_product"></unset>
    </condition>
  </change>
</input>
The "Forwarding and Receiving" settings menu section is just a simplified way of specifying a default target group in outputs.conf. For a more complicated setup (like selectively forwarding events from single indexes or sets of indexes to specific receivers) you need to configure it manually in the config files; it involves more than just defining outputs - it requires assigning the proper metadata in props/transforms. See the link provided by @gcusello.
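In config terms, a sketch of such a setup might look like the following. All group, server, and index names here are placeholders, and the exact stanzas should be checked against the routing documentation linked above:

```
# outputs.conf - one target group per receiver
[tcpout:groupA]
server = serverA:9997

[tcpout:groupB]
server = serverB:9997

# props.conf - apply the routing transforms to all events
[default]
TRANSFORMS-index_routing = route_indexA, route_indexB

# transforms.conf - set _TCP_ROUTING based on the event's index
[route_indexA]
SOURCE_KEY = _MetaData:Index
REGEX = ^IndexA$
DEST_KEY = _TCP_ROUTING
FORMAT = groupA

[route_indexB]
SOURCE_KEY = _MetaData:Index
REGEX = ^IndexB$
DEST_KEY = _TCP_ROUTING
FORMAT = groupB
```

Note that this kind of routing only happens where events pass through a parsing pipeline (a heavy forwarder or an indexer), not on a universal forwarder.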
Hi @Adpafer, yes, you can, following the instructions at the URL above. Ciao. Giuseppe
Yes, that's a feature, not a bug. But seriously, Splunk internally stores and processes time as a so-called "unix timestamp", which is the number of seconds since midnight Jan 1st 1970. That timestamp does not change regardless of where the user is located and what timezone the user has set in their preferences. But the time is rendered according to the user's preferred timezone, which means that the same _time field from an event (or any other field on which you do strftime() or a format) will be rendered differently for different users. Furthermore, time range selections are interpreted according to your local user's timezone, which means that @d will mean something different depending on whether it's CET, BST, EST or whatever else you can come up with. As far as I remember, there is no support for specifying a timezone directly in the time range parameters, so you need to "cheat". One possible workaround (a bit ugly, I admit) is to use a subsearch (possibly packed into a macro) to render a timestamp in your local timezone, cut off the timezone part, append the desired timezone spec, and then parse the time string back to a unix timestamp to get your earliest/latest value as an integer. Very, very ugly, but it should work.
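The underlying behavior is easy to demonstrate outside Splunk. A quick Python sketch showing that one and the same unix timestamp renders differently per display timezone (the timestamp and zone names are arbitrary examples):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# One fixed unix timestamp - timezone-independent by definition
ts = 1700000000

# The same instant renders differently depending on the display
# timezone, just like _time does for users with different preferences.
for tz in ("UTC", "Europe/Warsaw", "America/New_York"):
    local = datetime.fromtimestamp(ts, ZoneInfo(tz))
    print(tz, local.strftime("%Y-%m-%d %H:%M:%S"))
# UTC              2023-11-14 22:13:20
# Europe/Warsaw    2023-11-14 23:13:20
# America/New_York 2023-11-14 17:13:20
```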
Hi
An indexer can forward logs to other servers (Forwarding and Receiving), and I have to configure the IP and port of the host to which the indexer forwards logs. I have two hosts visible in the GUI (Forwarding and Receiving):
serverA:portA
serverB:portB
The problem is that I do not want to send all logs to both hosts. I want to send logs from IndexA to hostA and logs from IndexB to hostB. Can I do this, and if yes, how?
Thanks and regards,
pawel
How do we disable the mouse-over items (Inspect, Full Screen, Refresh) in a Dashboard Studio dashboard? We would like to disable them because they overlay other information on our dashboard and get stuck if we click on an item on the page (not disappearing when moving the mouse to another item). The mouse was hovering over "WSSP", but the mouse-over item on "ARTAS-TTF3" is still visible, because that was the last clicked item.
Hi Neeraj
That metric is an overall global metric for all queries that gives you what it states: time spent doing executions for all queries in the database. What I would suggest is to speak to the DBA and have them create a custom query for you, which you can run under custom metrics for each DB and which can exclude this specific query, to give you a value that you can use in the health rule. Ciao
You should be a bit more specific about what columns you're talking about. If you're talking about the timeline view above the events list on the search screen, the resolution of that timeline is automatic and you can't change it.
Hi
Is the "old data" just on disk, left behind when you started to use the new servers, or is it frozen data? r. Ismo
By "saved search" I mean that the searches used in the dashboard are all "ds.savedSearch", which are updated on a cron schedule. So I would expect it to load results from previously executed searches. The browser is Firefox 102.10.0esr, and we see the same issue with an up-to-date Chrome. And by SVGs we are talking about simple boxes with text.
By "saved search" do you mean you are loading the results from a previously executed search, or re-running the search with substituted values? SVG is rendered in the browser, so your server configuration makes little difference here; you could try upgrading your browser environments?
What is the expected load time of a Dashboard Studio page in view mode, using only saved searches? In our environment we have a dashboard page with ~140 choropleth SVG items, each colored by a saved search. When loading/reloading the page, it takes 6 seconds for the overall Splunk page to load, another 6 seconds to load all our SVGs, and another 2 to color them, resulting in ~14.5 seconds to load that page in total. This is running on Splunk 9.1.0.2 in an environment with dedicated search heads and indexers on virtual machines, all NVMe storage, plenty of RAM, ... Using a simpler dashboard (<5 SVG items and a table with a live search), the total page loads within 5 seconds.   Is this the expected performance? Are there any performance tweaks we could apply? Things we should check/change/...?
What is the full search you are currently using (which is not giving you the results you expect)?
Hello,
We are ingesting CSV files from an S3 bucket using the custom SQS-based S3 input. Although the data is pulled in correctly, the fields are not getting extracted properly. The header line has been ingested as a separate event, and the header fields are not getting extracted. I have defined INDEXED_EXTRACTIONS = csv in props.conf. Is there any other way to extract a CSV file from the S3 bucket? Any workaround?
Are you talking about a user preference issue or an issue in ingested data?  If data is in UTC, your user can always select UTC as their UI preference; if your application logs in a local zone AND includes zone info in data, Splunk internally still uses UTC. If data is in a different time zone but lacks zone info, that's a really bad situation.  There are several documents about how to configure time correctly.  A good place to start is Configure timestamps.  Hope this helps.
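When the data itself lacks zone info, the usual fix is the TZ setting in props.conf for the sourcetype. A minimal sketch, where the sourcetype name and time format are placeholders for this example:

```
# props.conf - tell Splunk how to interpret zone-less timestamps
# for this (hypothetical) sourcetype logged in US Eastern time
[my_app_sourcetype]
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TZ = America/New_York
```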
Hi @vikas_gopal,
first of all, the configuration you defined isn't recommended by Splunk, but it isn't a production system, so it can work. About the idea of having a stand-alone server containing the old data (which is in an indexer cluster): you could use one of the cluster search peers, disconnecting it from the old cluster, but you have to pay attention to the steps to follow: disconnect the indexers from the cluster one by one; that way, the last remaining indexer will hold a copy of all the data, and then you can disconnect it from the cluster as well. It isn't a usual procedure and I'm not sure that it has been tested, but it should work. Ciao. Giuseppe