All Posts

Lookup is just one type of knowledge object.  Field extractions, transforms, calculated fields, event types, tags, and so on can all have limited permissions if any of your subsearches use them.  For example, you think a field is available to you, and it appears to be available in the search window because you are the owner of that private extraction.  But the field may not be available when another user runs the dashboard.  Again, this is just one example.
Many thanks! Yes, that is exactly what I want; your answer is very helpful!
As a newbie I am currently working on a mini internship project which requires me to analyse a dataset using Splunk. I have completed almost all but the last part of it, which reads "gender that performed the most fraudulent activities and in what category". Basically I'm supposed to get the gender (F or M) that performed the most fraud, and specifically in which category. The dataset consists of the columns steps, customer, age, gender, PostcodeOrigin, merchant, category, amount, and fraud, from a file named fraud_report.csv. The file has already been uploaded to Splunk.  I am just stuck at the query part.
Hi @yuanliu, the macro is shared in the app, and I don't use any lookup files in the macro. I use join in the macro to get the data from 3 different source types. Is the join causing the issue?
When you defined the lookup, did you set the match type to CIDR? This is in Advanced options.
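For reference, a minimal sketch of what that looks like in transforms.conf; the stanza name, file name, and ip_range field here are hypothetical placeholders:

# transforms.conf - CIDR matching on the ip_range column (names are examples)
[my_cidr_lookup]
filename = cidr_ranges.csv
match_type = CIDR(ip_range)
min_matches = 1

The equivalent setting in Splunk Web is under Settings > Lookups > Lookup definitions > Advanced options > Match type.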
By "make the result has 3 columns," do you mean that when logs only come from less than 3 servers, you still want to display the one with no logs (with value 0)? In that case, you must know the exac... See more...
By "make the result has 3 columns," do you mean that when logs only come from less than 3 servers, you still want to display the one with no logs (with value 0)? In that case, you must know the exact name of the three servers.  Then, use foreach to fill the values. index=* AND appid=127881 AND message="*|NGINX|*" AND cluster != null AND namespace != null | eval server = (namespace + "@" + cluster) | timechart span=1d count by server | foreach "127881-p@23p", "127881-p@24p", "127881-p@25p" [eval <<FIELD>> = if(isnull('<<FIELD>>'), 0, '<<FIELD>>')]  
The subject is too generic without knowing what the macro consists of.  But if there are no obvious error messages, the problem could be in the permissions of knowledge objects (lookups, extractions/transforms, calculated fields, etc.) used in the macro. First, of course, check if the macro itself is shared in the app where the dashboard runs.  Then, is there any lookup used in the macro that is not shared with this app?  And so on, and so forth.
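One quick way to check, as a rough sketch (the macro name is a placeholder), is to list macro ACLs with the REST search command:

| rest /servicesNS/-/-/configs/conf-macros
| search title="my_macro"
| table title eai:acl.app eai:acl.sharing eai:acl.owner

A sharing value of "user" means the macro is private and will not resolve for other users running the dashboard.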
You can go into token management to find out which this token belongs to, then go into permissions and find out what permissions the user has. To think, every user who can launch a search should be allowed to use /services/search/jobs endpoint.  So, that is highly abnormal.  Maybe first test that user in UI to see if it can launch job manager menu.  Meanwhile, a trivial user should not be allowed to see another user's search, so denying /services/search/jobs/<searchid> can be the result of "otherness". Also, it is not clear what exactly context defines "sometimes".  If the behavior is inconsistent over time using the same token on the same endpoint, maybe it's time to call support.
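To take the UI out of the picture, a minimal sketch of exercising the endpoint directly with the token (host, port, and token are placeholders):

curl -k -H "Authorization: Bearer <your-token>" \
    https://splunksh.example.com:8089/services/search/jobs \
    -d search="search index=_internal | head 1"

If this consistently returns 403 for that user's token, it is a role/capability problem rather than an intermittent one.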
Hello,  based on this Splunk query:

index=* AND appid=127881 AND message="*|NGINX|*" AND cluster != null AND namespace != null
| eval server = (namespace + "@" + cluster)
| timechart span=1d count by server

Because the logs are only kept for 1 month, and in the most recent month logs are only in server 127881-p@23p, the query result shows only 1 column: 127881-p@23p.

May I ask how to make the result have 3 columns: 127881-p@23p, 127881-p@24p, 127881-p@25p? And since there are no logs in 24p and 25p recently, the values for 24p and 25p should be 0.

Thanks a lot!
Hi All, I have written a macro to get a field. It has 3 joins. When I use the macro in a dashboard, in a base search, it is not working properly and gives far fewer results. But when I use the macro in the search bar it gives correct results. Does anyone know how I can solve this?
Hello, in my Splunk web service we have the domain, for example: https://splunksh.com  The problem is anyone can access https://splunksh.com/config without logging in. Although the page doesn't contain any sensitive data, our Cyber Security team deems it a vulnerability that needs to be fixed. I want to know how to either disable that URL or redirect it to the login page. Any help would be very appreciated.
Hello everyone, I have collected some firewall traffic data: two firewalls (fw1/fw2), each with two interfaces (ethernet1/1 and ethernet1/2), collecting rxbytes and txbytes every 5 minutes.  The raw data is shown below: >>> {"timestamp": 1726668551, "fwname": "fw1", "interface": "ethernet1/1", "rxbytes": 59947791867743, "txbytes": 37019023811192} {"timestamp": 1726668551, "fwname": "fw1", "interface": "ethernet1/2", "rxbytes": 63755935850903, "txbytes": 32252936430552} {"timestamp": 1726668551, "fwname": "fw2", "interface": "ethernet1/1", "rxbytes": 0, "txbytes": 0} {"timestamp": 1726668551, "fwname": "fw2", "interface": "ethernet1/2", "rxbytes": 0, "txbytes": 0} {"timestamp": 1726668851, "fwname": "fw1", "interface": "ethernet1/1", "rxbytes": 59948210937804, "txbytes": 37019791801583} {"timestamp": 1726668851, "fwname": "fw1", "interface": "ethernet1/2", "rxbytes": 63755965708078, "txbytes": 32253021060643} {"timestamp": 1726668851, "fwname": "fw2", "interface": "ethernet1/1", "rxbytes": 0, "txbytes": 0} {"timestamp": 1726668851, "fwname": "fw2", "interface": "ethernet1/2", "rxbytes": 0, "txbytes": 0} {"timestamp": 1726669151, "fwname": "fw1", "interface": "ethernet1/1", "rxbytes": 59948636904106, "txbytes": 37020560028933} {"timestamp": 1726669151, "fwname": "fw1", "interface": "ethernet1/2", "rxbytes": 63756002542165, "txbytes": 32253111011234} {"timestamp": 1726669151, "fwname": "fw2", "interface": "ethernet1/1", "rxbytes": 0, "txbytes": 0} {"timestamp": 1726669151, "fwname": "fw2", "interface": "ethernet1/2", "rxbytes": 0, "txbytes": 0} {"timestamp": 1726669451, "fwname": "fw1", "interface": "ethernet1/1", "rxbytes": 59949094737896, "txbytes": 37021330717977} {"timestamp": 1726669451, "fwname": "fw1", "interface": "ethernet1/2", "rxbytes": 63756101313559, "txbytes": 32253199085252} {"timestamp": 1726669451, "fwname": "fw2", "interface": "ethernet1/1", "rxbytes": 0, "txbytes": 0} {"timestamp": 1726669451, "fwname": "fw2", "interface": "ethernet1/2", "rxbytes": 0, "txbytes": 0} {"timestamp": 1726669752, "fwname": "fw1", "interface": "ethernet1/1", "rxbytes": 59949550987330, "txbytes": 37022105630147} {"timestamp": 1726669752, "fwname": "fw1", "interface": "ethernet1/2", "rxbytes": 63756167141302, "txbytes": 32253286546113} {"timestamp": 1726669752, "fwname": "fw2", "interface": "ethernet1/1", "rxbytes": 0, "txbytes": 0} {"timestamp": 1726669752, "fwname": "fw2", "interface": "ethernet1/2", "rxbytes": 0, "txbytes": 0} {"timestamp": 1726670052, "fwname": "fw1", "interface": "ethernet1/1", "rxbytes": 59949968397016, "txbytes": 37022870539739} {"timestamp": 1726670052, "fwname": "fw1", "interface": "ethernet1/2", "rxbytes": 63756401499253, "txbytes": 32253380028970} {"timestamp": 1726670052, "fwname": "fw2", "interface": "ethernet1/1", "rxbytes": 0, "txbytes": 0} {"timestamp": 1726670052, "fwname": "fw2", "interface": "ethernet1/2", "rxbytes": 0, "txbytes": 0} <<< Now I need to create one chart to show the value of "rxbytes" over time, with 4 series: (series 1) fw1, ethernet1/1 (series 2) fw1, ethernet1/2 (series 3) fw2, ethernet1/1 (series 4) fw2, ethernet1/2 But I am having trouble composing the SPL statement for this purpose. Can you please help here? Thank you in advance!
I want to count user_ids that appear more than once per month (i.e. a user that has used the product multiple times).  I've tried a few variations such as:

search XXX | dedup XXX | stats count by user_id | where count > 1

but can't seem to get it to work. I am hoping to display the count as a single number, as well as timechart it so I can show the number over the last X months. Any suggestions? It feels like it should've been easier than it has been!
Hello,  I have a requirement to show one of my Splunk Cloud dashboards embedded in an on-prem SharePoint page. I am trying to use an iframe for that purpose but get a "Connection Refused" error. Any ideas, or has anyone tried this?
So there is no way to customize these letters/abbreviations?
Let's add some additional stuff to the mix. 1. The raw number of events is one thing, but their size also matters. It's one thing to send 1,000 short syslog messages and a completely different thing to send 1,000 multi-kilobyte stack dumps from a Java app. 2. Technically, at some point you will hit some limit (after all, server memory doesn't grow on trees ;-)). But sending tens of ISO images within a single batch request probably isn't what you're aiming at. 3. And finally, even if you're not using acks, will your source be able to resend the event batch from a given point should any error happen in the middle of the batch and only some events were accepted?
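For context, a minimal sketch of what a batched HEC request looks like (host, port, and token are placeholders); events are simply concatenated JSON objects in one POST body:

curl -k https://splunk.example.com:8088/services/collector/event \
    -H "Authorization: Splunk <hec-token>" \
    -d '{"event": "first event"}{"event": "second event"}{"event": "third event"}'

Without acks, HEC returns a single response for the whole request, which is exactly why point 3 above matters.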
That's understandable. Your files consist mostly of a relatively constant part repeated across all files (the header and some relatively constant fields), so Splunk will guess that it's all the same file. If the filenames are unique and the files are not rotated in any way, you can use crcSalt=<SOURCE> (that's actually one of the rare cases where it makes sense). Otherwise, raise initCrcLength so that it catches the variable parts of the event. As a side note, it seems that the event is very verbose and could use some serious editing on ingest to save on license (you don't need the majority of the raw data). An additional question is whether there should be any event breaking done within a single file.
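A minimal inputs.conf sketch of the two options (the monitor path is a placeholder; pick one setting, not both):

[monitor:///var/log/myapp/*.log]
# Option 1: unique, non-rotating filenames - add the source path to the CRC
crcSalt = <SOURCE>
# Option 2: otherwise, hash more of the file head instead
# initCrcLength = 1024

The default initCrcLength is 256 bytes, so any value large enough to reach past your constant header should do.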
I've never done this myself (usually you grow from a stand-alone instance to a clustered environment), but there is no real reason why one of your indexers shouldn't work as a stand-alone machine. Of course, you know how to remove one indexer from the cluster (I hope you don't have rf=sf=1). If you have rf=2, sf=1 and a relatively symmetrical distribution of primaries, you might need extra storage, since Splunk will have to rebuild index files from raw data on the remaining indexer. If you have rf=sf=2, you'll just take one indexer down and that's it. One caveat: since your rf/sf will not be met with just one indexer, your cluster will be searchable but not complete, since you'll always be missing the other indexer.
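As a rough sketch of the mechanics (verify against the docs for your version; the exact steps here are assumptions, not a tested procedure):

# On the indexer being removed from the cluster
splunk offline

# On the remaining indexer: edit $SPLUNK_HOME/etc/system/local/server.conf,
# remove (or comment out) the [clustering] stanza, then restart
splunk restart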
Actually, ingesting via an S3 bucket is a fairly unusual scenario. Start easy - by deploying a UF on a Windows box and reading its event log channels. Then try ingesting data with file monitor inputs. Then you can try installing some apps with modular inputs and configuring them. And actually, adding data is not really much of a cybersecurity task. It's more of an admin chore.
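As a starting point, a minimal sketch of a UF inputs.conf for Windows event logs (the channels shown are just common examples):

[WinEventLog://Security]
disabled = 0

[WinEventLog://System]
disabled = 0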
We can only see that the server is throwing a 500 error. We can't tell why. There should be something more in the logs. Check out _internal to see what's going on.
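As a starting point, a sketch of where to look (adjust the time range to when the 500s occurred):

index=_internal sourcetype=splunkd log_level=ERROR
| stats count by component

The component field usually points at which subsystem is failing.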