source="sampleprepared_data.csv" fraud="1" | stats count values(fraud) by age,merchant | sort - count I have tried this query to aggregate the data by age and merchant and sorted the data in des...
See more...
source="sampleprepared_data.csv" fraud="1" | stats count values(fraud) by age,merchant | sort - count I have tried this query to aggregate the data by age and merchant and sorted the data in descending order, i feel like something is missing, i can't figure out what
1. The question is a bit ambiguous.
2. We don't know your data. Post some (possibly anonymized, though I don't think it's necessary in this case).
3. What have you tried so far, and how do the results differ from what you expected?
Hi, I am new to Splunk and trying to gain hands-on experience. I am having trouble searching the data to answer this question: "Which age group performed the most fraudulent activities, and at which merchant?" Can anyone help me figure out the solution?
What does splunk list inputstatus show on the UF? It tells you which files the UF has read and how far into them it has gotten. Are you sure the timestamps are being picked up correctly from the files? If there is a mismatch between European and US time formats, you must look for events carrying times other than "now". When you are onboarding a new source, it's useful to run a real-time search over all time for the known hosts/sources. That way you can catch wrongly recognized timestamps.
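(As an illustration - the index and source names below are placeholders: the inputstatus check runs on the UF's command line, and timestamp problems can also be spotted by comparing _time against _indextime, which Splunk records for every event:

$SPLUNK_HOME/bin/splunk list inputstatus

index=<your_index> source=<your_source>
| eval lag_seconds=_indextime-_time
| stats min(lag_seconds) max(lag_seconds) avg(lag_seconds) by source

A very large or negative lag usually points at a wrongly parsed timestamp.)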
Here is a link to the CIM (Splunk Common Information Model): https://docs.splunk.com/Documentation/CIM/latest/User/Overview. By following it you can create a dashboard/report etc. only once; then you just add new data sources and they will show up there.
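(For example, mapping an existing field onto a CIM field name can be as simple as a field alias in props.conf - the sourcetype and field names here are hypothetical:

[my:sourcetype]
FIELDALIAS-cim_src = source_address AS src

Once a source's fields follow the model, any CIM-based dashboard or report picks the new source up automatically.)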
Or is there a possibility to use a separate index for those events and afterwards even wipe out that content? Anyhow, as @PickleRick said, a bucket is removed only after all events inside it have expired. Mixing old and new data (from a timestamp/_time point of view) usually makes this take quite a long time.
Maybe you could utilize that priority attribute with those two sources and use the same TRANSFORMS-null attribute for both of them? See the details in the previous doc link; a sketch follows below.
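(A minimal sketch of that idea - the source paths and the filtering regex are assumptions; only the structure matters:

props.conf:
[source::/var/log/app_a.log]
priority = 10
TRANSFORMS-null = setnull

[source::/var/log/app_b.log]
TRANSFORMS-null = setnull

transforms.conf:
[setnull]
REGEX = unwanted_pattern
DEST_KEY = queue
FORMAT = nullQueue

Events matching the regex are routed to the nullQueue, i.e. discarded before indexing.)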
While using Real User Monitoring, should the performance of the web application deteriorate for any reason, we would like to pause the RUM agent and resume monitoring later depending on the situation. We request the Splunk RUM agent API reference documentation that provides the full list of API methods, including pause, resume, and others.
Your question has far too little detail to be answered reliably. First and foremost: what kind of data are you trying to ingest? What is the producer of that data? With some solutions it's possible to extract some standardized fields which can be used to analyze the data instead of the plain-text description possibly included in a later part of the event. But if the source generates data in language A, the data is in A. For some limited use cases you could try static lookups to substitute text in language A with text in language B, but that would be a nightmare to maintain. Using a translation service at search time, as @BRFZ suggested, is certainly possible but would be hugely impractical and could introduce privacy issues when using external services.
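(If you did go the static-lookup route, it would look something like this - the lookup name and field names are hypothetical, and you would have to maintain the translation table by hand:

... | lookup phrase_translations text_local OUTPUT text_en
| eval text_en=coalesce(text_en, text_local)

The coalesce keeps the original text for phrases missing from the table, which is exactly the maintenance problem described above.)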
That's what Splunk does - it fetches all of the events that meet the search criteria. If you want a single result, express that in the SPL using head 1, tail 1, dedup, or something similar.
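(For example, to keep only the most recent matching event - the base search is a placeholder:

index=<your_index> sourcetype=<your_sourcetype>
| sort - _time
| head 1)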
1. You can't get events directly from evtx files, so don't even bother trying. But seriously - the UF uses system calls to query Event Log channels, so no direct reading of the files is involved (a minimal input stanza is sketched below).
2. Are you getting _any_ event logs from this UF?
3. What user does your splunkd.exe run as? Did you adjust the ACLs on the event logs? Did you grant that user the proper privileges?
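(For reference, a minimal inputs.conf stanza on the UF - the Security channel is just the common example:

[WinEventLog://Security]
disabled = 0

If a stanza like this is present and you still get nothing, the ACL/privilege questions above are the usual culprits.)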
Can't you use the time of ingestion as _time (which would influence retention) and use another field for storing your event's own time? (In that case it could make sense to make it an indexed field.) Buckets are rolled based on either the age of the data within the bucket (in terms of _time) or index size. That's it.
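(A sketch of that approach - the sourcetype name and the regex are assumptions: DATETIME_CONFIG = CURRENT stamps events with the ingestion time, and a transform with WRITE_META writes the original event time into an indexed field:

props.conf:
[my_sourcetype]
DATETIME_CONFIG = CURRENT
TRANSFORMS-origtime = extract_orig_time

transforms.conf:
[extract_orig_time]
REGEX = "timestamp":"([^"]+)"
FORMAT = orig_time::$1
WRITE_META = true

Retention then follows ingestion time, while orig_time stays searchable.)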
Hi @dataisbeautiful, instead of a single value panel, why don't you try an html box? Something like this:
<dashboard version="1.1">
<label>Home Page</label>
<row>
<panel>
<html>
<h1>IT Infrastructure</h1>
<table border="0" cellpadding="10" align="center">
<tr>
<td align="center">
<a href="dashboard1">
<img style="width:80px;border:0;" src="/static/app/my_app/Windows_logo.png"/>
</a>
</td>
<td align="center">
<a href="dashboard2">
<img style="width:80px;border:0;" src="/static/app/my_app/Linux_logo.png"/>
</a>
</td>
</tr>
<tr>
<td align="center">
<a href="/app/my_app/dashboard1">
Windows
</a>
</td>
<td align="center">
<a href="/app/my_app/dashboard2">
Linux
</a>
</td>
</tr>
</table>
</html>
</panel>
</dashboard>
Adapt this to your dashboards. Ciao. Giuseppe
Hi @ques_splunk, as you can read at https://docs.splunk.com/Documentation/ES/7.3.2/Install/InstallEnterpriseSecurity and as @PickleRick said, all the indexes for ES are contained in a TA that you can download from the Configure menu. Then you have to install this add-on on the indexers, or on the same machine, depending on your architecture. Ciao. Giuseppe
Your explanation is a little confusing, as people have already pointed out. What does "server's backend" mean in this context? You probably mean that you can access the machine on which the HF is running and log in to either a shell session or a local/remote desktop session, depending on what OS type we're talking about. Those are completely separate credentials from Splunk's own authentication. That's the first thing. Secondly, you say that you use LDAP-based authentication. That may be true, but usually external authentication methods are only used on the SH tier. Normal users don't typically access other environment components, so access other than the built-in admin account is usually not needed there.
I understand what drove Splunk to prepare this page, but it is best avoided. It encourages users to adopt anti-patterns which are not and should not normally be used in Splunk. Splunk is very different from an RDBMS, so it needs another "way of thinking". I find it easier to compare a Splunk search to processing data with a unix shell (I also suspect that the choice of the pipe sign to delimit the steps in the pipeline is not accidental). And as a rule of thumb, the join command should typically not be used in Splunk (yes, there are use cases for it, so it's there, but it's not nearly as common as in SQL).

I don't know what you mean by "multicolumn key" in this context, but you can either use stats with multiple by fields or - if you mean it the opposite way - create a synthetic field to split by. Like
| eval splitfield=field1."-".field2."-".field3 | stats count by splitfield
Just watch the cardinality...

EDIT: Oh, I didn't see your SQL example. So you can build such synthetic fields from both kinds of data (possibly using a conditional eval to calculate them separately for each subset) and then run stats by those fields.
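(A sketch of that last point, with entirely hypothetical index/sourcetype/field names - one search pulls both subsets, a conditional eval builds the common key, and stats replaces the join:

(index=idx_a sourcetype=st_a) OR (index=idx_b sourcetype=st_b)
| eval joinkey=if(sourcetype=="st_a", a_field1."-".a_field2, b_field1."-".b_field2)
| stats count values(sourcetype) by joinkey)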
1. Has the UF been restarted?
2. Look for _internal events from that UF regarding monitored files (see the sketch below).
3. Did you verify your resulting config with btool?
4. SELinux.
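(For points 2 and 3, something along these lines - the host is a placeholder:

$SPLUNK_HOME/bin/splunk btool inputs list monitor --debug

index=_internal host=<uf_host> source=*splunkd.log* (TailingProcessor OR WatchedFile)

The --debug flag shows which .conf file each setting comes from, and the _internal search surfaces the UF's own messages about the monitored files.)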