We need more information.  How much data will be ingested each day?  How long will that data be retained?  How much searching will the system perform?  If you have a single indexer, there is no need for a Cluster Manager (f.k.a. Cluster Master), and the search head can serve as the License Manager on such a small system.  For larger ingest volumes or better search performance, multiple indexers may be needed, which do call for a Cluster Manager.  Syslog data should not be sent directly to a Splunk process.  Instead, send it to a dedicated syslog server (rsyslog or syslog-ng) and write it to disk.  Then have a Splunk Universal Forwarder monitor the disk and forward the data to the indexer(s).
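As a minimal sketch of that pipeline (the port, file paths, index name, and sourcetype below are illustrative assumptions, not values from this thread):

```
# /etc/rsyslog.d/meraki.conf -- listen on UDP 514 and write each
# sender's messages to its own directory on disk
module(load="imudp")
input(type="imudp" port="514" ruleset="meraki")
template(name="merakiFile" type="string"
         string="/var/log/meraki/%FROMHOST-IP%/syslog.log")
ruleset(name="meraki") {
    action(type="omfile" dynaFile="merakiFile")
}
```

```
# inputs.conf on the Universal Forwarder (index/sourcetype assumed)
[monitor:///var/log/meraki/*/syslog.log]
index = network
sourcetype = meraki:syslog
# 4th path segment (/var/log/meraki/<ip>/...) becomes the host field
host_segment = 4
```

This keeps the syslog receiver decoupled from Splunk, so a Splunk restart never drops UDP syslog traffic.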
This is easy to do in Dashboard Studio, using either the absolute or the grid layout.
We are building a small Splunk installation in Azure and I'm not sure what the architecture should look like. The client came up with the idea based on the following link - Deploying Splunk on Microsoft Azure. They created an indexer, a search head, and a license server/cluster master. We do need to ingest syslog data from Meraki devices, so I wonder whether we need a heavy forwarder. Any thoughts?
I want to use HTML in multiple panels in order to create a custom layout for my Splunk dashboard. I want to use a layout where each rectangle is a panel. Is this possible to implement in a Splunk dashboard? Please advise.
To be fully honest - I have no idea what you want to do. Please post a sample of your incoming data and tell us where you want it broken into separate events.
The Total on the y-axis comes from the first column listed in your results, so replace that with a column whose name is a space:

| rex field=message "IamExample(?<total>).*"
| rex field=message ".*ACCOUNT(?<accountreg>.*):"
| rex field=message ".*Login(?<login>.*)"
| rex field=message ".*Profile(?<profile>)"
| rex field=message ".*Card(?<card>)"
| rex field=message ".*Online(?<online>) "
| stats count(total) as "_Total" count(accountreg) as "Account" count(login) as "Login" count(profile) as "Profile" count(card) as "Card" count(online) as "Online"
| foreach * [| eval name="<<FIELD>>: ".round(100*<<FIELD>>/_Total, 2)."%" | eval {name} = <<FIELD>>]
| table " " Account:* Login:* Profile:* Card:* Online:*
index=test pod=poddy1 "severity"="INFO" "message"="IamExample*"
| rex field=message "IamExample(?<total>).*"
| rex field=message ".*ACCOUNT(?<accountreg>.*):"
| rex field=message ".*Login(?<login>.*)"
| rex field=message ".*Profile(?<profile>)"
| rex field=message ".*Card(?<card>)"
| rex field=message ".*Online(?<online>) "
| stats count(total) as "Total" count(accountreg) as "Account" count(login) as "Login" count(profile) as "Profile" count(card) as "Card" count(online) as "Online"

When I choose a bar chart to display this, "Total" shows on the left-hand side; is there a way to remove it? Also, hovering over the chart shows the count; is there a way to make it display like the example below?

field, count, percentage

We want to divide Account, Login, Profile, and Online by the Total we have above.
It is possible to break events on *anything*.  It would help to see a sanitized example of the events you wish to break, but these settings should help. SHOULD_LINEMERGE = false LINE_BREAKER = ([\r\n]+)\d\d:\d\d
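Assuming the time-only timestamp format from the question (e.g. 17:22:29.875 at the start of each event), a complete props.conf stanza might look like this; the sourcetype name is an assumption:

```
# props.conf (on the indexer or heavy forwarder); sourcetype name assumed
[my_custom_log]
SHOULD_LINEMERGE = false
# break before a time like 17:22:29.875 at the start of a line
LINE_BREAKER = ([\r\n]+)\d{2}:\d{2}:\d{2}\.\d{3}
TIME_PREFIX = ^
TIME_FORMAT = %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 12
```

With no date in the event, Splunk falls back to the date of the previous event in the file or the file's modification time, so events can still be timestamped correctly as long as the file doesn't span midnight without a date anywhere.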
@gcusello  Yeah, it's odd. Neither of those two returns any stats results (I checked to make sure I copied the whole query and updated it as appropriate for indexes, etc.). The original query is only giving 20 entries under stats (and far fewer results), and it used to work, so that's also weird. What we've been doing is something along these lines:

index=test OR index=test2 source="insertpath" ErrorCodesResponse=TestError TraceId=*
| fields TraceId
| append [ search index=test "Test SKU" AND @mt!="TestAsync: Request(Test SKU: )*"
    | fields TraceId, @t, @mt, RequestPath
    | where isnotnull('@t') AND isnotnull('@mt') AND match('@mt', "Test SKU: *") ]
| eval date=strftime(strptime('@t', "%Y-%m-%dT%H:%M:%S.%6N%Z"), "%Y-%m-%d"),
       time=strftime(strptime('@t', "%Y-%m-%dT%H:%M:%S.%6N%Z"), "%H:%M")
| stats values(date) as date values(time) as time values(@mt) as message values(RequestPath) as Path by TraceId
| where isnotnull(date) AND isnotnull(time) AND isnotnull(message)
| table date, time, TraceId, message, Path

This seems to work better than our old search, but I'd prefer to figure out yours, as it doesn't use those appended searches.
How do I break multiple events into single events based on a timestamp? My logs don't have a date; they only have a timestamp. For example, each event starts in the format below:

17:22:29.875

Splunk version: 9.2.1

I have tried many options in props.conf but no luck; I still see multiple events combined in my search, and events are not broken at each timestamp. Will LINE_BREAKER work, or BREAK_ONLY_BEFORE? I tried both, but no luck. Is it possible to break events on a timestamp alone in Splunk, or is it only possible to break events on a full date and time? Thanks in advance.
Submit a support request to delete the scheduled searches.  Include the old app name(s).
Some time ago, on Splunk Cloud, I deleted a couple of apps that were used only for testing. These apps had some alerts configured. Now, I see that those test alerts are still running. I found them by searching: index=_internal sourcetype=scheduler app=<deleted app name> However, I can't see these apps in the app list anymore. How can I fix this? Thanks!
How do I create a custom data link in Splunk Observability Cloud to pass filtered values from a chart, so I can identify the root cause of an issue by navigating to the APM, RUM, or Synthetics page?
Hi, we have instrumented SQL Server metrics using OTel: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/receiver/sqlserverreceiver/documentation.md

We have a tempdb and, using the sqlserver receiver's OTel metrics, we need to: 1. identify space usage, and 2. determine which query contributes most to tempdb usage.
You might be able to do it in Classic SimpleXML dashboards - would this be an option for you?
Hi @yuanliu, in my case I need to search in the textbox with dynamic values from the message field, not with predefined values.
Hi. In the App menu, I have a situation where I need to keep installing apps with different version names. However, when this gets to high numbers it might not look so great (it might be difficult to find the app you need). I have two questions:

1st: Can I increase the height of the row? When the text wraps around, it does not look good (in my image I needed to shorten the name to stop it wrapping).

2nd: Can I add a multi-select drop-down to the right, like the image below?
The _internal index collects Splunk's internal (hence the name) events. Generally, the indexes whose names begin with an underscore are internal to Splunk, and you can expect the data there to be governed by default Splunk settings (you can adjust some of them, like the retention period, but that is not needed for them to work out of the box). Everything else is up to you. We don't know what your sources are, what your onboarding process looks like, what your indexes are, or how the data should get into them. So the question you stated is not for us; it's for your Splunk admins and architects. They should know what data should be ingested, from where, and into which index it should land. They should also know whether you are allowed to have access to that data, because usually not everyone has access to every index.
OK. Three more cents on that.

When you're searching for a condition like field="some value" (unless it's some special case which we're not going to bother with at this time), Splunk first searches its indexes for occurrences of the terms "some" and "value" and finds which events contain those two words (hopefully there will not be many events matching such criteria). Only those events get parsed, and Splunk then checks whether the "some value" string parses out in the proper spot within the event. That's quite effective for a typical, relatively sparse search. It can get less effective in some border cases, and then you might help yourself with other means, but that's a relatively advanced topic, so let's leave it at that.

If you're searching for either field!="some value" or NOT field="some value" (you are aware those are not equivalent, right?), Splunk might be able to relatively quickly find all events where neither "some" nor "value" exists, because if the search terms don't show up in the event at all, they obviously cannot match the field extraction. But:

1) This only accounts for the second case, NOT field="some value". For the field!="some value" condition, Splunk still has to parse the event and check whether there is any value for the field. And if we have a multivalued field, it gets even more confusing: field!="some value" might still match even if one of the values in the multivalued field does equal "some value", as long as there is another one which doesn't.

2) Even if both "some" and "value" appear in the event, they might be in different places within it, so the event as a whole might still not match our initial condition.

So it's far better to narrow the event set with multiple intersecting inclusion conditions, so that Splunk has to reach raw data and parse fields for fewer events, than to use a general exclusion on a very "wide" base.
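To make that concrete, here is a sketch; the index, sourcetype, and field values are made up for illustration:

```
A broad exclusion forces Splunk to parse far more events to
evaluate the condition:

    index=web NOT status=200

Intersecting inclusion terms let the index narrow the candidate
set before any field parsing happens:

    index=web sourcetype=access_combined status=404 action=purchase
```

The second search touches only events containing all the listed terms, while the first has to consider everything in the index that isn't positively excluded.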
Another thing worth knowing, which might not be important in your particular case since you're simply using inputlookup within your subsearch (a very quick command), is that subsearches have limits. If your subsearch hits the execution time limit (by default it's 60 seconds, IIRC) or exceeds the limit on returned results (10k rows; 50k in some specific use cases like join), it is _silently_ finalized and only the results obtained so far are returned to the outer search. What is most tricky here is that, because the subsearch is finalized _silently_, you won't be aware that it didn't produce a full result set, and you won't be aware that your search as a whole might return incomplete or plain wrong results. So you must be very, very careful with subsearches and always make sure you're not going to hit those limits.
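For example (the lookup file, field names, and index below are assumptions, not from this thread), a lookup-driven filter like this is silently truncated if the lookup outgrows the subsearch limits:

```
index=proxy [ | inputlookup blocked_domains.csv | fields domain | rename domain AS query ]
```

The relevant knobs live in limits.conf on the search head:

```
# limits.conf -- defaults shown; raise with care, as large
# subsearch result sets make the generated outer search expensive
[subsearch]
maxout = 10000
maxtime = 60
```

If the lookup has 12,000 rows and maxout stays at 10,000, the generated OR-list quietly covers only the first 10,000 domains and no warning tells you the rest were dropped.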
@PickleRick: Thank you for the explanations; now I understand what is going on. About inclusion/exclusion and search efficiency: I was not aware of this; it is something I will need to take care of as well.