All Posts



If I use the command ./splunk add monitor /var/log, the file /splunk/etc/apps/search/local/inputs.conf is modified. However, if I use the command ./splunk add forward-server a.a.a.a:9997, /splunk/etc/system/local/outputs.conf is modified instead. Why do these two CLI tasks behave differently, with one modifying a file under the search app and the other modifying the system file? Even considering configuration file precedence, both are GLOBAL context, so I would expect both to be placed under the system folder. My question may be naive or have some shortcomings; I would really appreciate your advice.
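For illustration, here is a sketch of what the two commands typically write. The stanza contents are assumptions for a default setup, not taken from the poster's system:

```ini
# Illustrative sketch only.
# "./splunk add monitor /var/log" writes to the CLI session's app context
# (the search app by default), e.g. /splunk/etc/apps/search/local/inputs.conf:
[monitor:///var/log]
disabled = false

# "./splunk add forward-server a.a.a.a:9997" writes to the system context,
# e.g. /splunk/etc/system/local/outputs.conf:
[tcpout:default-autolb-group]
server = a.a.a.a:9997
```

The difference is the app context of the command, not the sharing level of the setting: monitor inputs are added in the context of the current app, while forwarding output is treated as an instance-wide setting.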
If your problem is resolved, then please click the "Accept as Solution" button to help future readers.
This is perfect. Thank you!
Thank you very much!!! 
A search like this I think will do it. (Set your search window to be the last 7 days.)

index=<origin_index> Operation="FileUploadedToCloud" earliest=-7d latest=now
| stats count as upload_count, earliest(_time) as earliest_upload_epoch, latest(_time) as latest_upload_epoch by user, targetdomain
| sort 0 -upload_count
| stats dc(user) as dc_user, list(user) as users, min(earliest_upload_epoch) as earliest_upload_epoch, max(latest_upload_epoch) as latest_upload_epoch, list(upload_count) as upload_count, sum(upload_count) as total_upload_count by targetdomain
``` filter down results to only include domains where 1 user has uploaded in the specified search window ```
| where 'dc_user'==1
| convert ctime(earliest_upload_epoch) as earliest_upload_timestamp, ctime(latest_upload_epoch) as latest_upload_timestamp
| fields - *_upload_epoch

This should also return results if a single user has uploaded to a domain multiple times but is still the only user to upload to it in the last 7 days. If the scope needs to be narrowed to only one upload event per user, then you can additionally filter to return only the events where 'total_upload_count'==1.
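Outside of Splunk, the core of that logic (keep only domains with exactly one distinct uploader) can be sketched in plain Python. The sample events and field names below are illustrative, chosen to mirror the search above:

```python
# Sketch of the search's core logic: group uploads by domain, keep domains
# where only one distinct user uploaded. Data is made up for illustration.
from collections import defaultdict

events = [
    {"user": "alice", "targetdomain": "dropbox.com"},
    {"user": "alice", "targetdomain": "dropbox.com"},  # same user again
    {"user": "bob",   "targetdomain": "dropbox.com"},  # second distinct user
    {"user": "carol", "targetdomain": "box.com"},      # sole uploader
]

# Equivalent of: stats dc(user) by targetdomain
users_by_domain = defaultdict(set)
for e in events:
    users_by_domain[e["targetdomain"]].add(e["user"])

# Equivalent of: where dc_user == 1
single_user_domains = {
    domain: next(iter(users))
    for domain, users in users_by_domain.items()
    if len(users) == 1
}
print(single_user_domains)  # -> {'box.com': 'carol'}
```

Note that dropbox.com is excluded even though alice uploaded there twice, because a second distinct user (bob) also uploaded to it, which matches the dc(user) behaviour of the SPL.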
Search over the last 7 days and count entries by target domain. Filter out anything with a count greater than 1.

index=foo Operation=FileUploadedToCloud user=* targetdomain=* earliest=-7d
| stats count, values(*) as * by targetdomain
| where count=1
Hi all, I am trying to put together a search and stats table for users in our environment who have uploaded data to a domain where there has not been any other upload activity to that domain in the last 7 days. Operation="FileUploadedToCloud" - I'm working with fields such as user and targetdomain. Any help is appreciated! Thanks!
Thanks!!! That is EXACTLY what I was trying to do and just was not getting.  The solution makes complete sense and is cleaner than I expected.  Thanks for taking the time to give all the information/help!
Generally yes. If we were to just copypasta the correct solution here, it might have a hard time sticking for them. I have found that learning by worked examples, with a detailed explanation for doing things a certain way, tends to stick well (provided that the questioner is actually interested in learning and improving their Splunk skills). So hopefully the OP can take away some additional knowledge that can be applied elsewhere on their Splunk journey. At the same time, they do not have to stress about meeting job deadlines while trying to figure out the nudges in the correct direction. I'm new here on the forums and don't quite know the etiquette yet; just trying to spread Splunk knowledge in a manner that I feel would be the most beneficial for me if I were posting a question here. Happy Splunking!
Hi, You need to install Java JRE 1.8+ before installing Akamai Splunk Connector. For example, use "yum install java-1.8.0-openjdk" if your HF is based on Linux CentOS. Regards.
If it's just that host that is affected then verify the input for that file is present on the host and not disabled.  Make sure Splunk still has read access to the file.  Check splunkd.log on the host for any messages that might explain the problem.
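As a quick sketch of the read-access part of that check, the snippet below uses a temporary file as a stand-in for the real monitored log; on an actual host you would run the check as the user that splunkd runs as, against the real path:

```python
# Sketch: verify a file is readable by the current user, the way the Splunk
# user must be able to read a monitored log. The path here is a throwaway
# temp file standing in for something like /var/log/myapp.log.
import os
import tempfile

with tempfile.NamedTemporaryFile(mode="w", suffix=".log", delete=False) as f:
    f.write("sample log line\n")
    path = f.name

readable = os.access(path, os.R_OK)
print(readable)  # True if the current user can read the file
os.remove(path)
```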
Hi all, I've set up an SC4S instance just to forward nix:syslog events. In local/context/splunk_metadata.csv:

nix_syslog,index,the_index
nix_syslog,sourcetype,nix:syslog

I can't find the events in Splunk, and splunkd.log is filling with:

12-29-2023 09:52:50.993 +0000 ERROR HttpInputDataHandler [2140 HttpDedicatedIoThread-0] - Failed processing http input, token name=the_token, channel=n/a, source_IP=172.18.0.1, reply=7, events_processed=1, http_input_body_size=1091, parsing_err="Incorrect index, index='main'"

The HEC probes at SC4S boot are successful and are inserted into the correct index. Any help would be really appreciated. Thank you, Daniel
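For context, an "Incorrect index" reply from HEC usually means the index carried by the event is not in the token's allowed list (or does not exist); the error showing index='main' also suggests the metadata override is not being applied to these events. A sketch of the relevant token stanza on the Splunk side, with illustrative names only:

```ini
# inputs.conf on the HEC-receiving Splunk instance (names are assumptions).
# "indexes" is the allowed list for the token; it must include every index
# SC4S may send to, and each index must actually exist on the indexers.
[http://the_token]
token = <token-value>
index = the_index
indexes = the_index
disabled = 0
```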
Hi, I prefer to use a naming schema for all KOs in Splunk. That way you can point any KO to affect only the logs you want. You should never use generic names like access_log, service, etc. Always use something like my:app1:access_log. There are docs and other examples of how you can define your own naming schema, and you can change/extend it later when needed. r. Ismo
Hi, you should tell us more about your situation, like:

- your environment
- have these events come in earlier
- are you the only one who doesn't see them
- what has changed

Without this kind of base information it's quite frustrating to guess what the reason could be! There are also quite a few similar issues already solved in the community. Just try to use Google/Bing/whatever your search engine is to see how these are normally solved. r. Ismo
@dtburrows3 already showed you how you can combine those together: first count the total with eventstats, then calculate and present with chart. Usually you will remember these better when you work them out yourself rather than just being given the correct answer.
Hello, I am also facing the same issue. Can anyone suggest anything on this?
@richgalloway, I do have access to index=abc. I don't know why data from that host is not coming in. Checking the backend, I can see logs arriving on a daily basis, but they are not being ingested into index=abc. In the backend I am able to follow the path /home/sv_cidm/files and see the logs. What should I do now? Your help will be appreciated. Thanks
Hi @beepbop, what do you mean by "scheduled index time"? Are you speaking of the timestamp (recorded in the _time field)?

If that is your requirement, you can use either the timestamp recorded in the event or the timestamp of when the event was indexed. If you want the event timestamp, Splunk tries to recognize it automatically; otherwise (e.g. when there are multiple timestamps in the event) you have to teach Splunk to identify it using two parameters in props.conf (TIME_PREFIX and TIME_FORMAT). If there isn't any timestamp in the event, Splunk uses the timestamp of when the event was indexed or the timestamp of the previously indexed event (the default). Ciao. Giuseppe
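As a sketch of those two props.conf parameters, suppose a hypothetical event like "level=INFO ts=2023-12-29 09:52:50 msg=..." under a made-up sourcetype; the stanza and format string below are assumptions for that shape of event:

```ini
# props.conf sketch (sourcetype name and event layout are hypothetical).
# TIME_PREFIX is a regex matched just before the timestamp; TIME_FORMAT is
# strptime-style; MAX_TIMESTAMP_LOOKAHEAD limits how far past the prefix
# Splunk reads when extracting the timestamp.
[my:custom:sourcetype]
TIME_PREFIX = ts=
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
```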
Hi Team, I have developed a sample .NET MSMQ sender and receiver console application and have tried instrumenting it. I could load the profiler and was able to see the MQ details and transaction snapshots for the sender application, but I was unable to get MQ details for the receiver application in the AppDynamics controller. We are expecting an MSMQ entry point for the .NET consumer application. I tried resolving the issue by adding POCO entry points as AppDynamics describes in the link below, but it didn't help. Message Queue Entry Points (appdynamics.com) Please look into this issue and help us resolve it. Thanks in advance.
Hi, how can I change the scheduled index time of a data source?