All Posts


Hi all, I am trying to put together a search and stats table for users in our environment who have uploaded data to a domain where there has not been any other upload activity to that domain in the last 7 days. Operation="FileUploadedToCloud" - I'm working with fields such as user and targetdomain. Any help is appreciated! Thanks!
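One way to sketch this, using the user and targetdomain fields from the question. Note the assumption here: "no other upload activity" is read as "only one distinct user has uploaded to that domain in the window", so adjust the where clause if you mean something else (e.g. first-ever activity for the domain):

```
Operation="FileUploadedToCloud" earliest=-7d
| eventstats dc(user) as distinct_uploaders by targetdomain
| where distinct_uploaders=1
| stats earliest(_time) as first_upload, count as uploads by user, targetdomain
| convert ctime(first_upload)
```

The eventstats counts distinct uploaders per domain across the whole 7-day window, the where keeps only domains with a single uploader, and the final stats builds the table.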
Thanks!!! That is EXACTLY what I was trying to do and just was not getting.  The solution makes complete sense and is cleaner than I expected.  Thanks for taking the time to give all the information/help!
Generally yes. If we were to just copy-paste the correct solution here, it might have a hard time sticking for them. I have found that learning by worked examples, with a detailed explanation for doing things a certain way, tends to stick well (provided that the questioner is actually interested in learning and improving their Splunk skills). So hopefully the OP can take away some additional knowledge that can be applied elsewhere on their Splunk journey. At the same time, they don't have to stress about meeting job deadlines while trying to figure out the nudges in the correct direction. I'm new here on the forums and don't quite know the etiquette yet. I'm just trying to spread Splunk knowledge in the manner I would find most beneficial if I were the one posting a question here. Happy Splunking!
Hi, You need to install Java JRE 1.8+ before installing the Akamai Splunk Connector. For example, use "yum install java-1.8.0-openjdk" if your HF is based on Linux CentOS. Regards.
If it's just that host that is affected then verify the input for that file is present on the host and not disabled.  Make sure Splunk still has read access to the file.  Check splunkd.log on the host for any messages that might explain the problem.
Hi all, I've set up an SC4S instance just to forward nix:syslog events. In local/context/splunk_metadata.csv:

nix_syslog,index,the_index
nix_syslog,sourcetype,nix:syslog

I can't find the events in Splunk, and splunkd.log is filling with:

12-29-2023 09:52:50.993 +0000 ERROR HttpInputDataHandler [2140 HttpDedicatedIoThread-0] - Failed processing http input, token name=the_token, channel=n/a, source_IP=172.18.0.1, reply=7, events_processed=1, http_input_body_size=1091, parsing_err="Incorrect index, index='main'"

The HEC probes at SC4S boot are successful and are inserted into the correct index. Any help would be really appreciated. Thank you, Daniel
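For what it's worth, a parsing_err="Incorrect index" from HttpInputDataHandler usually means the HEC token on the Splunk side is not permitted to write to the index the event is tagged with. A sketch of what the receiving token's stanza might need to look like (the stanza name and file location are illustrative, not taken from the post):

```
# inputs.conf on the HEC receiver -- names are illustrative
[http://sc4s]
token = <the_token>
# default index used when an event carries no index metadata
index = main
# any index SC4S sets via splunk_metadata.csv must be listed here,
# or events are rejected with parsing_err="Incorrect index"
indexes = main, the_index
```

Also worth confirming that the_index actually exists on the indexers; a missing index produces the same rejection.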
Hi, I prefer to use a naming schema for all KOs in Splunk. That way you can point any KO at only the logs you want it to affect. You should never use generic names like access_log, service, etc. Always use something like my:app1:access_log. There are docs and other examples of how you can define your own naming schema, and you can change/extend it later when needed. r. Ismo
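As a sketch of what such a schema can look like for one KO type (the names below are made up purely for illustration; the <org>:<app>:<purpose> pattern is one common convention):

```
# eventtypes.conf -- illustrative names following an <org>:<app>:<purpose> schema
[acme:billing:access_log]
search = index=acme_billing sourcetype=access_combined

[acme:billing:error_log]
search = index=acme_billing sourcetype=billing_errors log_level=ERROR
```

The prefix makes the owning app obvious in any list of KOs and keeps names collision-free when multiple teams share a search head.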
Hi, you should tell us more about your situation, like:
- your environment
- whether those events have come in earlier
- whether you are the only one who didn't see them
- what has changed
Without this kind of base information it's quite frustrating to guess what the reason could be! There are also quite a few similar issues already solved in the community. Just try to use Google/Bing/whatever your search engine is to see how these are normally solved. r. Ismo
@dtburrows3 already showed you how to combine those together: first count the total with eventstats, then calculate and present with chart. Usually these things stick better when you have to work them out yourself rather than just getting the correct answer.
Hello, I am also facing the same issue. Can anyone suggest something?
@richgalloway, I do have access to index=abc. I don't know why data is not coming in for that host; checking in the backend, I can see logs arriving on a daily basis, but they are not being ingested into index=abc. In the backend I am able to follow the path /home/sv_cidm/files and can see the logs. What should I do now? Please help; it will be appreciated. Thanks
Hi @beepbop, what do you mean by "scheduled index time"? Are you speaking of the timestamp (the one recorded in the _time field)? If that's your requirement, you can use either the timestamp recorded in the event or the timestamp of when the event is indexed. If you want the event timestamp, Splunk tries to recognize it automatically; otherwise (e.g. when there are multiple timestamps in the event) you have to teach Splunk to identify the right one using two parameters in props.conf (TIME_PREFIX and TIME_FORMAT). If there isn't any timestamp in the event, you can use the timestamp of when the event is indexed, or the timestamp of the previously indexed event (the default). Ciao. Giuseppe
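A minimal props.conf sketch of those two parameters, assuming a hypothetical sourcetype whose raw events begin with something like ts=2023-12-29T09:52:50.993+0000 (both the sourcetype name and the timestamp layout are illustrative):

```
# props.conf -- sourcetype name and timestamp format are illustrative
[my:custom:sourcetype]
# text that immediately precedes the timestamp in the raw event
TIME_PREFIX = ts=
# strptime-style layout of the timestamp itself
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
# limit how far past TIME_PREFIX Splunk scans for the timestamp
MAX_TIMESTAMP_LOOKAHEAD = 30
```

These settings apply at index (parsing) time, so they belong on the indexer or heavy forwarder, and only affect events ingested after the change.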
Hi Team, I have developed a sample .NET MSMQ sender and receiver console application and tried instrumenting it. I could load the profiler and was able to see the MQ details and transaction snapshots for the sender application, but was unable to get MQ details for the receiver application in the AppDynamics controller. We are expecting an MSMQ entry point for the .NET consumer application. I tried resolving the issue by adding POCO entry points as AppDynamics describes in the link below, but it didn't help. Message Queue Entry Points (appdynamics.com) Please look into this issue and help us resolve it. Thanks in advance.
hi, how can I change the scheduled index time of a data source?
@dtburrows3 Thank you for your support! It works!
This was a writeup that I did for this.

Backup Splunk
Stop Splunk and back up the entire Splunk folder if able.
/opt/splunk/bin/splunk stop
tar -zcvf splunk_pre_secret.tar.gz /opt/splunk/etc

Find encrypted passwords
find /opt/splunk/etc -name '*.conf' -exec grep -inH '\$[0-9]\$' {} \;
Record the context (file location, stanza, parameter).
You can decrypt the hashed passwords with the following:
/opt/splunk/bin/splunk show-decrypted --value 'PASSWORDHASH'

Update the splunk.secret
Copy the splunk.secret file from 192.168.70.2 to /opt/splunk/etc/auth/splunk.secret on the target system.
cp /home/dapslunk/splunk.secret /opt/splunk/etc/auth/splunk.secret
Ensure the permissions are correct (400).
ll /opt/splunk/etc/auth/splunk.secret

Update all of the password sections
Use the following to find any missed passwords that have not been corrected:
find /opt/splunk/etc -name '*.conf' -exec grep -inH '\$[0-9]\$' {} \;

Restart Splunk
/opt/splunk/bin/splunk restart

Verify
- Access to the Splunk GUI works
- Any splunk commands that require authentication work
- Connection to the license master / cluster / deployment server is up
- Inputs have data coming in
- LDAP authentication works
- All passwords are encrypted (use the find command from before)
I think this SPL should do what you are asking.

<base_search>
| stats count as count by appName, resultCode
| eventstats sum(count) as app_total_count by appName
| eval pct_of_total=round(('count'/'app_total_count')*100, 3)
| chart sum(pct_of_total) as pct_of_total over appName by resultCode
| fillnull value=0.000

Using dummy data on my local instance, I was able to get output where each cell value represents the percentage of events that occurred with that status code for each app. For a full overview of how I simulated this locally, you can reference this code.

| makeresults
| eval appName="app1", resultCode=500
| append [ | makeresults | eval appName="app1", resultCode=500 ]
| append [ | makeresults | eval appName="app1", resultCode=split("404|404|200|404|500|404|200", "|") ]
| append [ | makeresults | eval appName="app2", resultCode=split("200|200|404|200", "|") ]
| append [ | makeresults | eval appName="app3", resultCode=split("404|200|200|200|500|404|200", "|") ]
| mvexpand resultCode
``` below is the relevant SPL ```
| stats count as count by appName, resultCode
| eventstats sum(count) as app_total_count by appName
| eval pct_of_total=round(('count'/'app_total_count')*100, 3)
| chart values(pct_of_total) as pct_of_total over appName by resultCode
| fillnull value=0.000

Edit: To use this at scale (more than 10 unique status codes) you would need to add a limit=<number> parameter to the chart command; otherwise it will collect everything after 10 unique values into another column named "OTHER". In a bigger table you can also add a column at the right to give more context on the total count of status codes seen for each application, so that the percentages can be tied back to a total count if desired. Code to do this would look something like the following.
(Added comments for each line to add detail)

<base_search>
``` simple count of events for each unique combo of appName/resultCode values ```
| stats count as count by appName, resultCode
``` sum up the counts across each unique appName ```
| eventstats sum(count) as app_total_count by appName
``` add additional rows as a Total count for each unique appName value (this is optional, since this step just provides more context to the final table) ```
| appendpipe
    [ | stats sum(count) as count by appName
      | eval resultCode="Total", app_total_count='count' ]
``` calculate the percentage of occurrence for a particular resultCode across each unique appName, with the exception of the Total rows; for those we just carry over the total app count (for context) ```
| eval pct_of_total=if(
        'resultCode'=="Total",
            'count',
            round(('count'/'app_total_count')*100, 2)
    )
``` This chart command builds a sort of contingency table, using the derived pct_of_total values and putting them in their respective cells, with the appName values in the left-most column and the values from resultCode as the table header.
    Note: if a specific resultCode shows up for one app but not for another, charting this way will leave a blank slot in that cell. To get around this we use a fillnull command in the last step of the search.
    Adding the limit=<number> parameter sets the upper limit on the number of columns generated for unique resultCode values. It defaults to 10 if not specified, and resultCode values beyond the top 10 fall into the bucket "OTHER" and are tabled that way.
    Example: if you only want to see the top 3 resultCode values in the table and don't want the additional "OTHER" column, you also have the option to set useother=false along with limit=3 ```
| chart limit=100 useother=true sum(pct_of_total) over appName by resultCode
``` rename the "Total" column to something more descriptive ```
| rename Total as "Total Count for App"
``` fill any blank cells with 0, as this is the implied percentage when null ```
| fillnull value=0.00
Hi, how long did it take before you got the results?
Hi @Mahendra.Shetty, As you noticed, this question was asked over a year ago. It may not get a reply. I would suggest asking the question fresh on the community or you can even try contacting Support since you are a partner. How do I submit a Support ticket? An FAQ  If you happen to find a solution, please do share it here or on your new post, if you make one.
Hey, thanks for the feedback. Yes, I tried stats/eventstats. I can get the counts to show, but I don't see how to get the query to count up the total per app and then use that to calculate the percentage of each error per app. Everything I try, like the snippets I showed in my question, just doesn't seem to get me closer to being able to show percentages.