Hi @jianw223 , I am not a Cribl employee and not affiliated with them in any way, just a happy Cribl user who set it up for the exact same use case for a customer. Not slow, not buggy in my experience. BR Ralph
Hi @jjarevalo , I created an "AS400 Monitoring App" for Splunk a while ago. The customer was more interested in the jobs processed by the AS400 than in DB2 performance/availability, but maybe this helps you: We have a script that sits on the AS400 and has access to the DB2. For your requirement, maybe this is enough:

qsh -c "db2 -S \"SELECT TOTAL_JOBS_IN_SYSTEM,ACTIVE_JOBS_IN_SYSTEM,AVERAGE_CPU_UTILIZATION FROM TABLE(QSYS2.SYSTEM_STATUS(RESET_STATISTICS=>'YES'))\"" | grep -v -e "----" -e "SELECTED" -e "^$"

(You may have to adapt the SELECT statement and/or the grep filters to your setup.) This would give you something like:

TOTAL_JOBS_IN_SYSTEM ACTIVE_JOBS_IN_SYSTEM AVERAGE_CPU_UTILIZATION
3630 223 2.81

This script is called by a scripted input in Splunk. It can be a simple sh/Python script that just does:

ssh user@as400_host "/path/to/script"

A Python version could use paramiko, for example. If you call this on a regular basis, you could already build a simple dashboard and alerting: alert on high CPU usage, alert if no jobs are processed (Active Jobs = 0, or the active job count does not change over a period of time), and of course alert if the script does not return any data at all. That MIGHT mean the DB2/AS400 has an issue (or maybe your input app 😉 ). Hope it helps to give you an idea. BR Ralph
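If you prefer to do the cleanup on the Splunk side instead of with grep, the same filtering can be sketched in Python. The function name and parsing details here are my own illustration, not part of the original script:

```python
def parse_db2_output(raw):
    """Filter db2 CLI output the way the grep above does and
    return the remaining rows split into columns."""
    rows = []
    for line in raw.splitlines():
        line = line.strip()
        # drop separator lines, the "... ROWS SELECTED" footer and blanks
        if not line or "----" in line or "SELECTED" in line:
            continue
        rows.append(line.split())
    return rows

sample = """TOTAL_JOBS_IN_SYSTEM ACTIVE_JOBS_IN_SYSTEM AVERAGE_CPU_UTILIZATION
-------------------- --------------------- -----------------------
3630 223 2.81

1 ROWS SELECTED"""

header, values = parse_db2_output(sample)
print(dict(zip(header, values)))
```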
Hi @supriyamore , Yeah, I understand that. You wouldn't really be able to work with a pie that has 500+ slices. Maybe you have to define your requirement in more detail: what is your expectation when you click on the "Other" slice of the pie? Splunk cannot guess which of the 500 "others" the user wants more details on. (Well, maybe the DeepLearningToolKit could learn that over time 🤔😄) Just two half-baked ideas: - You could use the "Other" token to start a drilldown search that removes the top 10, so that the next dashboard shows a pie without them. Or not even a pie, but a statistics table, so the user is able to see what the "others" are. - Provide an "Others" pulldown menu next to the pie, so that users can select the "Other" item they need details on. They can then either click a specific slice of the pie or select one from the pulldown. BR Ralph
Hi @supriyamore , When you click on "Format" in the Visualization tab, you can specify the minimum percentage a value needs in order not to be grouped into "other". Also, if you use the "chart" command to put together the data for the pie chart, you can specify "useother=f". But the format setting mentioned above seems to override that option, as I just found out (tested with Splunk 8.0.1). BR Ralph
Hi @zekiramhi, Is the user that runs Splunk (I guess "splunk") able to read the files referenced in the monitor stanza? Sourcetype is not mandatory (but recommended). Per the documentation: "If not set, the indexer analyzes the data and chooses a source type." BR Ralph
Hi @bharathkumarnec , That's weird. What makes you think that these messages reach syslog-ng at all? Where do you see the error message you mentioned? Maybe you see a more detailed error message when you run syslog-ng in the foreground. Stop the daemon and then run: /opt/syslog-ng/sbin/syslog-ng -Fedv This runs syslog-ng in the foreground, so everything goes to stdout. If you get a lot of messages, you may want to pipe that to a file and run it for a short period only. To see the messages regardless of what syslog-ng does to them, you can try: tcpdump -i eth0 port 514 -v You may have to change the interface or port to match your environment. (Same here: if your screen explodes due to too many messages, pipe it to a file and run it only briefly.) You can also pipe tcpdump through grep. Grep for something unique to the CyberArk logs if you receive more traffic via the same port: tcpdump -i eth0 port 514 -v | grep -C2 <cyber ark unique string> Maybe one of these options gets you closer to the root cause. Cheers Ralph
Hi Klaudia, There is nothing wrong with your SPL 🙂 And you found a workaround (set the alert trigger before adding the "KB"), so this is just cosmetic: you could simply change the last 5 evals to fieldformat. That way the values are still numbers, but displayed for us silly humans with "KB" (or %) 😛 | eval total_KB_bytes=total_KB_bytes."KB"
change to => | fieldformat total_KB_bytes=total_KB_bytes."KB"
| eval KB_bytes_in=KB_bytes_in."KB"
change to => | fieldformat KB_bytes_in=KB_bytes_in."KB" and so forth with all 5 evals. Now you can still calculate/compare/whatever with the values, regardless of the "KB" added. Cheers Ralph
Hi @klaudiac, Try fieldformat. It changes the way a value is displayed, but does not change the underlying value/type of data. | eval kb=bytes/1024
| fieldformat kb=round(kb,2)."kb" Hope it helps. BR Ralph
Hi @RonD , Maybe not the most dynamic solution, but it should work: | eval latest_date=strftime(max(strptime(agent1_date,"%m/%d/%Y"),strptime(agent2_date,"%m/%d/%Y"),strptime(agent3_date,"%m/%d/%Y"),strptime(agent4_date,"%m/%d/%Y"),strptime(agent5_date,"%m/%d/%Y")),"%m/%d/%Y") The strptime and strftime are there to convert the dates to epoch and back to human-readable form, in order to make them comparable. Hope it helps BR Ralph
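The same idea can be sketched in Python, which also shows why the epoch round-trip is needed: comparing "%m/%d/%Y" strings directly would sort "12/01/2012" after "07/15/2013". This helper is my own illustration:

```python
from datetime import datetime

def latest_date(dates, fmt="%m/%d/%Y"):
    # parse each date string to a datetime so they compare chronologically,
    # then format the maximum back to the original string form
    return max(datetime.strptime(d, fmt) for d in dates).strftime(fmt)

print(latest_date(["04/24/2013", "12/01/2012", "07/15/2013"]))  # → 07/15/2013
```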
Hi @jboustead , Before you run the timechart, add this: | streamstats count as remove_trigger by date_mday reset_on_change=true
| where remove_trigger>3 This would remove the 3 latest/most recent events per day. Make sure it still works when the month changes within your events (and you have two different days with "1" as date_mday, for example); I am not sure it does. You may have to add the month to the streamstats by clause. Hope it helps. BR Ralph
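For illustration, the streamstats/where logic above corresponds roughly to this Python sketch. The month/day field names are my own stand-ins for date_month/date_mday, and the events are assumed newest-first, as Splunk returns them:

```python
from itertools import groupby

def drop_latest_per_day(events, n=3):
    """Events are assumed newest-first, as Splunk returns them.
    Drop the first n (i.e. the latest) events of each (month, day) group."""
    kept = []
    for _, group in groupby(events, key=lambda e: (e["month"], e["day"])):
        kept.extend(list(group)[n:])
    return kept

# newest-first sample: 4 events on day 2, then 5 events on day 1
events = [{"month": 1, "day": 2, "v": i} for i in range(4)] \
       + [{"month": 1, "day": 1, "v": i} for i in range(5)]
print(drop_latest_per_day(events))  # keeps 1 event from day 2 and 2 from day 1
```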
Hi @bharathkumarnec , You could try the no-parse flag (flags(no-parse)) for the source definition in the syslog-ng config. If the error relates to the message format, this could at least help to get the data coming in. With that flag, syslog-ng puts the entire message into the $MESSAGE macro. You might end up with duplicate timestamps or similar artifacts; you can work around that with templates on the destinations and/or rewrite rules. It also helps to see what the messages look like when they come in, with tcpdump for example. Maybe it's something weird syslog-ng cannot work with at all. Hope it helps. BR Ralph
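A minimal sketch of what such a source and destination could look like; the port, the names, and the file path are placeholders for your setup:

```
source s_cyberark {
    network(transport("udp") port(514) flags(no-parse));
};

destination d_raw {
    # with no-parse, the whole message ends up in ${MESSAGE}
    file("/var/log/cyberark_raw.log" template("${MESSAGE}\n"));
};

log { source(s_cyberark); destination(d_raw); };
```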
Hi @rizwan0683, I prefer timechart over bin ... _time | timechart sum(foo_counter) as foos, sum(bar_counter) as bars Then select "Column Chart" on the Visualization tab -> click "Format" -> "Chart Overlay", and in the text field at the top type "bars". Now you have foos as columns, overlaid by bars as a line chart. Cheers Ralph
Hi @rizwan0683 , Glad I could help. To see what it does, just run the query without the last line, like this: | makeresults
| eval foo="1 2 3 2 5 2 1 5"
| makemv foo
| mvexpand foo
| eval foo_counter=if(foo=1 OR foo=2, 1, 0) As you can see, it adds a field called "foo_counter" to all the rows and sets it to "1" if foo is "1" or "2". The "0" is basically the "else" value, so everything but 1 and 2 is set to 0. When you sum foo_counter up (which the stats command does), you get the count of 1s and 2s. Ralph
Hi @BSingh27 , This was answered an hour ago by @richgalloway here: https://community.splunk.com/t5/Splunk-Search/how-to-table-the-hosts-missing-in-splunk-with-lookup-file-for/m-p/529163/ BR Ralph
Hi @heamik, With fieldformat you should be good; it is made exactly to keep the original type of the data. With this simple sample I could not reproduce your issue. After using fieldformat you can still calculate with mfp: | makeresults
| eval mfp=50.23
| fieldformat mfp=round(mfp, 1)."%"
| eval mfp=mfp+10.45 You could check what Splunk "thinks" mfp is with | eval mfp_type = typeof(mfp) Maybe something prior to the fieldformat already turns it into a string? You could try | eval mfp = tonumber(mfp) Hope it helps. BR Ralph
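As a Python analogy (my own, just to illustrate keeping the number and attaching the "%" only for display):

```python
# the stored value stays numeric, so arithmetic keeps working ...
mfp = float("50.23")   # float() plays the role of tonumber()
mfp = mfp + 10.45

# ... while the "%" is attached only at display time, like fieldformat
display = f"{round(mfp, 1)}%"
print(display)  # → 60.7%
```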
Hi @rizwan0683 , This should do it: | makeresults
| eval foo="1 2 3 2 5 2 1 5"
| makemv foo
| mvexpand foo
| eval foo_counter=if(foo=1 OR foo=2, 1, 0)
| stats sum(foo_counter) as foo_counter You'll just need the last 2 lines. The others are only there to make up some sample data. Hope it helps. BR Ralph
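The same counting pattern in plain Python, in case it helps to see the if/sum idea outside SPL:

```python
values = [1, 2, 3, 2, 5, 2, 1, 5]

# the if(foo=1 OR foo=2, 1, 0) plus stats sum() pattern, expressed in Python:
foo_counter = sum(1 if v in (1, 2) else 0 for v in values)
print(foo_counter)  # → 5
```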
Hi @avoelk , Are ActionDate and ActionTime already extracted fields that you can work with? If so, try this: | makeresults
| eval ActionDate="2013-04-24"
| eval ActionTime="00:07:00"
| eval _time= strptime(ActionDate +" " + ActionTime,"%Y-%m-%d %H:%M:%S") You will just need the last line. It puts the Date and Time in one string and converts it to an epoch timestamp (this is what _time needs). Edit: Removed a step that was not needed from my first approach. If the fields are not yet extracted, you can just pipe the xml to xmlkv and then use the last line of my SPL: <your search to get the xml event(s)>
| xmlkv
| eval _time= strptime(ActionDate +" " + ActionTime,"%Y-%m-%d %H:%M:%S") Hope it helps. BR Ralph
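For comparison, the same concatenate-and-parse step in Python (pinned to UTC here so the result is reproducible; Splunk's strptime applies your configured timezone instead):

```python
from datetime import datetime, timezone

action_date = "2013-04-24"
action_time = "00:07:00"

# concatenate date and time and parse them in one go, like the SPL strptime
dt = datetime.strptime(action_date + " " + action_time, "%Y-%m-%d %H:%M:%S")
epoch = dt.replace(tzinfo=timezone.utc).timestamp()
print(int(epoch))
```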
Alright, quick and dirty: add the following stanza to the file $SPLUNK_HOME/etc/system/local/inputs.conf and restart the forwarder: [monitor://C:\path\to\your\logfile\]
disabled = 0
index = <indexname>
sourcetype = <sourcetype> Note: The index needs to exist in Splunk, and it should reflect the data it contains. Maybe there is already an index that fits the data; if not, you would have to create one (another topic). You could check the stanzas that are already in the inputs.conf, or run a search like index=* | stats count by index, sourcetype (not in Verbose mode, and only over a timeframe of a few hours) to get a feeling for how the data is set up in your environment. The sourcetype is your choice, but again it should relate to the data. Example: when adding network devices, you could call the index "dell" and the sourcetype "dell:switches". Not sure what kind of logs you are ingesting... Are you the admin of the Splunk environment? I would suggest at least doing the Fundamentals I & II courses. If you are not the admin, ask them which index and sourcetype you should choose. They probably also want to create an app for the input instead of adding the stanza to the "main" inputs.conf. Hope this helps. BR Ralph
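Following the dell/switches naming example above, a filled-in stanza could look like this (the path is an illustration only, and the index must already exist in Splunk):

```
[monitor://C:\Logs\Switches\]
disabled = 0
index = dell
sourcetype = dell:switches
```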
Hi @Glace , Some things need to be considered: - What kind of OS is running on the host where the logs are located? - Is there already data being sent from that device(s) to Splunk? - Which version of Splunk are you running: version number? Cloud or on-prem? - Are Splunk and the source VM running in the same network? But basically, the most common way is to use a Universal Forwarder and monitor the folder where these log files are located. The time/date should be recognized by Splunk without any further configuration. Install this: https://www.splunk.com/en_us/download/universal-forwarder.html Configure as described here: https://docs.splunk.com/Documentation/Splunk/8.1.0/Data/Monitorfilesanddirectorieswithinputs.conf BR Ralph
Hi @naknake , Usually by right-click on the column -> "rename this column". I guess that's what you used to do. What is different now in the new company? BR, Ralph
Hi @TravisT , Can't help you with the Add-on Builder, but here are 2 other options: 1. Use this app from Splunkbase, which should be ready to go: https://splunkbase.splunk.com/app/1546 2. Put your curl command in a .sh script, or create a Python script that does the same, and use a scripted input. The stdout of the script becomes your indexed events in Splunk. BR Ralph
Hi @adrianrepublic , You could add this at the end of your search to get a column with today's date: |eval today=strftime(now(), "%Y-%m-%d") Or this, if you prefer epoch: |eval todayepoch=now() The field should then also be created in the csv. Hope that works for you. BR Ralph