All Posts

Here is a link to the CIM (Splunk Common Information Model) documentation: https://docs.splunk.com/Documentation/CIM/latest/User/Overview. By following it you only have to create a dashboard / report etc. once; when you add new data sources, they will show up there automatically.
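For example, a rough sketch assuming the CIM add-on is installed and your data is mapped to the Authentication data model (the datamodel and field names here are the standard CIM ones, adjust to what you actually use):

| tstats summariesonly=false count from datamodel=Authentication where Authentication.action="failure" by Authentication.src Authentication.user

Any new source that gets CIM-mapped to Authentication will show up in this search (and in dashboards built on it) without touching the SPL.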
Or is there a possibility to use a separate index for those events and afterwards even wipe out that content? Anyhow, as @PickleRick said, a bucket is removed only after all events inside it have expired. Mixing old and new data (from a timestamp/_time point of view) usually makes this take quite a long time.
Maybe you could utilize that priority attribute with those two sources and use the same TRANSFORMS-null attribute with both of those sources? See the details in the previous doc link.
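A rough props.conf / transforms.conf sketch of that idea (the stanza names and the regex are placeholders for your actual sources and filter):

# props.conf
[source::/var/log/app/first.log]
priority = 10
TRANSFORMS-null = drop_unwanted

[source::/var/log/app/second.log]
priority = 10
TRANSFORMS-null = drop_unwanted

# transforms.conf
[drop_unwanted]
REGEX = pattern_to_drop
DEST_KEY = queue
FORMAT = nullQueue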
While monitoring with Real User Monitoring, should the performance of the web application deteriorate for any reason, we would like to pause the RUM agent and resume monitoring later on based on the situation. We request the Splunk RUM agent API reference documentation that provides the full list of API methods, including pause, resume and other methods.
Your question has way too little data to be answered reliably. First and foremost - what kind of data are you trying to ingest? What is the producer of said data? With some solutions it's possible to extract some standardized fields which can be used to analyze the data instead of the plain-text description possibly included in the further part of the event. But if the source is generating data in language A, the data is in A. For some limited use cases you could try to use static lookups to substitute text in language A for language B, but that would be a nightmare to maintain. Using some translation service at search time, as @BRFZ suggested, is certainly possible but would be hugely impractical and could introduce privacy issues when using external services.
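For completeness, the static-lookup approach would look roughly like this (the lookup and field names are made up for illustration, and you'd have to maintain the translation table yourself):

index=myindex sourcetype=my_sourcetype
| lookup message_translations message_a OUTPUT message_b
| eval message=coalesce(message_b, message_a)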
That's a bit more complicated than that. See https://dev.splunk.com/enterprise/docs/devtools/customsearchcommands/  
That's what Splunk does - it fetches all of the events that meet the search criteria.  If you want a single response then put that in the SPL using head 1, tail 1, dedup or something similar.
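For example (index, sourcetype and field names are placeholders):

index=myindex sourcetype=my_sourcetype
| dedup transaction_id

or simply

index=myindex sourcetype=my_sourcetype
| head 1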
1. You can't get events directly from evtx files so don't even bother trying. But seriously - the UF uses system calls to query eventlog channels, so no direct reading from the files is involved.
2. Are you getting _any_ eventlogs from this UF?
3. What user does your splunkd.exe run as? Did you adjust ACLs on the eventlogs? Did you grant the user the proper privileges?
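For reference, a minimal eventlog input on the UF looks something like this (channels listed as examples, adjust to the ones you need):

# inputs.conf
[WinEventLog://Application]
disabled = 0

[WinEventLog://Security]
disabled = 0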
Can't you use the time of ingestion as _time (which would influence retention) and use another field for storing your event's original time? (In this case it could make sense to make it an indexed field.) Buckets are rolled based on either the age of the data within the bucket (in terms of _time) or index size. That's it.
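A rough sketch of that idea, assuming a placeholder sourcetype whose events start with an ISO-style timestamp (the stanza names and regex are made up, adjust to your data):

# props.conf
[my_sourcetype]
DATETIME_CONFIG = CURRENT
TRANSFORMS-origtime = extract_orig_event_time

# transforms.conf
[extract_orig_event_time]
REGEX = ^(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})
FORMAT = orig_event_time::$1
WRITE_META = true

# fields.conf (search tier)
[orig_event_time]
INDEXED = true

This way _time is the ingestion time (so retention is driven by when the data arrived) while orig_event_time keeps the original timestamp as an indexed field.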
Hi @dataisbeautiful , instead of a single value panel, why don't you try an html box? Something like this:

<dashboard version="1.1">
  <label>Home Page</label>
  <row>
    <panel>
      <html>
        <h1>IT Infrastructure</h1>
        <table border="0" cellpadding="10" align="center">
          <tr>
            <td align="center">
              <a href="dashboard1">
                <img style="width:80px;border:0;" src="/static/app/my_app/Windows_logo.png"/>
              </a>
            </td>
            <td align="center">
              <a href="dashboard2">
                <img style="width:80px;border:0;" src="/static/app/my_app/Linux_logo.png"/>
              </a>
            </td>
          </tr>
          <tr>
            <td align="center">
              <a href="/app/my_app/dashboard1">Windows</a>
            </td>
            <td align="center">
              <a href="/app/my_app/dashboard2">Linux</a>
            </td>
          </tr>
        </table>
      </html>
    </panel>
  </row>
</dashboard>

Adapt it to your dashboards. Ciao. Giuseppe
Hi @AliMaher , good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated also by the other contributors
Hi @ques_splunk , as you can read at https://docs.splunk.com/Documentation/ES/7.3.2/Install/InstallEnterpriseSecurity and as @PickleRick said, all the indexes for ES are contained in a TA that you can download from the Configure menu. Then you have to install this add-on on the Indexers or on the same machine, depending on your architecture. Ciao. Giuseppe
Your explanation is a little confusing, as people already pointed out. What does "server's backend" mean in this context? You probably mean that you can access the machine on which the HF is running and log in to either a shell session or a local/remote desktop session, depending on what OS type we're talking about. Those are completely separate credentials from Splunk's own authentication. That's the first thing. Secondly, you're saying that you use LDAP-based authentication. That might be true, but usually external authentication methods are only used on the SH tier. Normal users don't typically access other environment components, so access other than the built-in admin account is usually not needed there.
I understand what drove Splunk to prepare this page but this is best avoided. It encourages users to use some anti-patterns which are not and should not normally be used in Splunk. Splunk is very different from an RDBMS so it needs another "way of thinking". I find it easier to compare a Splunk search to processing data with a unix shell (I also suspect that the choice of the pipe sign to delimit the steps in the pipeline is not accidental). And as a rule of thumb, the join command should typically not be used with Splunk (yes, there are use cases for it so it's there, but it's not as common as in SQL). I don't know what you mean by "multicolumn key" in this context but you can either use stats with multiple by fields or - if you mean it the opposite way - you can create a synthetic field to split by. Like

| eval splitfield=field1."-".field2."-".field3
| stats count by splitfield

Just watch the cardinality... EDIT: Oh, I didn't see your SQL example. So you can make such synthetic fields from both kinds of data (possibly using a conditional eval to calculate them separately for each subset) and then stats by those fields.
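A rough sketch of that conditional-eval variant (sourcetypes and field names are invented for illustration):

(sourcetype=orders) OR (sourcetype=shipments)
| eval key=case(sourcetype=="orders", order_id."-".customer_id, sourcetype=="shipments", ship_order."-".ship_customer)
| stats count by key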
1. Has the UF been restarted?
2. Look for _internal events from that UF regarding monitored files.
3. Did you verify your resulting config with btool? (See the example below.)
4. SELinux?
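For point 3, something along these lines on the forwarder (the monitored path is just an example):

$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep /var/log/myapp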
One more thing, because it's often overlooked when talking about DMs. DMs as such don't accelerate anything. DMs are just an intermediate layer of logic making Splunk able to search different types of data in the same way, so when you search from a DM using DM field constraints, Splunk "underneath" transforms your search into a raw data search and lets you search possibly multiple separate indexes and sourcetypes without even knowing the real structure of the underlying data.

DM _acceleration_ however is a completely different beast. It's the machinery that's running under Splunk's hood and prepares this database of indexed datamodel contents so that you can search using those pre-built summaries instead of digging through the raw data itself.

So while DMA requires properly ingested and configured data normalized for DMs, it's this "one step beyond" that gives you performance benefits. If you just have DMs which are not accelerated you might be able to search your data more easily (and create pivots) but it will not give you any performance gains. It's the DM acceleration that makes Splunk go zzzzooooooom.
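To make the difference concrete, a sketch assuming the CIM Web data model is populated (standard CIM names used): with summariesonly=true the search only touches the accelerated summaries, with summariesonly=false it can fall back to the much slower raw-data search for non-summarized time ranges.

| tstats summariesonly=true count from datamodel=Web where Web.status=500 by Web.src

| tstats summariesonly=false count from datamodel=Web where Web.status=500 by Web.src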
What is your environment architecture? All-in-one? Separate simple indexer tier? Clustered indexer tier? If your indexers are a separate tier, have you deployed the "for-indexer" TA?
If I understand your problem correctly - you have two fields (sensor1 and sensor2) which contain your data points, but you also have a "classifying" field host, effectively giving you four separate data series, right? And you want to get four separate fields from that to be able to do four distinct aggregations for your timechart.

Well, there might be several different possible approaches to this. One is to just use a set of conditional evals to create synthetic fields from your data as @yuanliu showed. The downside to this method is that it can be tedious to write all those evals and keep track of them, especially if your data is more complicated than just two sensors and two hosts.

Another one is to use the {} notation to dynamically create field names. A run-anywhere example (not really timecharting much due to just a few input values, but showing the idea):

| makeresults format=csv data="_time,sensor1,sensor2,host
1,1,2,host1
1,2,3,host2
2,4,5,host1
2,5,6,host2"
| eval {host}sensor1=sensor1
| eval {host}sensor2=sensor2
| fields - sensor1 sensor2
| timechart avg(host*sensor*) as **

This is easier to maintain because it happens automagically, but the downside is that you have much less control over the resulting field names (of course you can rename them manually, but that's when we again step into the territory of manual fiddling with your data).
I suppose it makes sense. With a moving time window Splunk has to keep track of the window and the events fitting that window. If you don't include the current event, Splunk doesn't know how many previous events it has to keep and include in your calculations. If you have a fixed window expressed in a number of events - that's easy - Splunk always has to remember the last n events to calculate your aggregation. But in the case of a time window it would make Splunk have to remember many more events than are used to calculate the stats, in case they don't "fall out" of the window on the next event. So it's probably simply easiest to forbid use_current=f.

I suppose you could do some ugly hacks like streamstatsing lists of values and manually calculating your aggregations, but that would probably be horribly inefficient.

I must say that I don't see the use case. What would a "5-minute window without the current event" even mean - a window of 5 minutes looking back from the previous remembered event? Or a window of 5 minutes looking back from the current event but without using the current event's value? In the latter case you could simply do "half automatic" calculations - for example with an average, you could just streamstats the sum and count, then subtract the current event's value from the sum and use count (or count-1) to calculate the average. For more sophisticated aggregations of course you'd need to be a bit more creative. But the former case - it doesn't differ from use_current=t if you're just aggregating from the previous event backwards. Maybe there's something more to this case you're not telling us and it can be done in yet another way.
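A rough sketch of that half-automatic average (index, sourcetype and the value field are placeholders), using a 5-minute window that includes the current event and then removing its contribution:

index=myindex sourcetype=metrics
| streamstats time_window=5m sum(value) as win_sum count as win_count
| eval avg_excl_current=if(win_count > 1, (win_sum - value) / (win_count - 1), null())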
I want the error message from "faultstring" to be displayed in my results:

</soap:Envelope>", RESPONSE="<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Header></soap:Header>
<soap:Body>
<soap:Fault>
<faultcode>soap:Server</faultcode>
<faultstring>APPL0014: IO Exception: Read timed out java.net.SocketTimeoutException: Read timed out</faultstring>
</soap:Fault>
</soap:Body>

My Splunk query is below:

index="abc" source="xyz" OPERATION="getOrderService"
| rex "RESPONSE=\\\"(?<RESPONSE>.+)"
| spath input=RESPONSE
| spath input=RESPONSE output=faultstring path=soapenv:Envelope.soap:Header.soapenv:Body.soapenv:Fault.faultcode.faultstring

Instead of fetching only the responses with a faultstring, it's fetching all the results from the responses.