All Posts


@bowesmana  Does the "Score" field in the "eval" pipe always hold the column total when it is applied after "addcoltotals"? Thank you so much. | addcoltotals | eval Score = if(isnull(Student), floor(Score), Score)
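Yes - `addcoltotals` appends one extra result holding the column totals, and in that appended row the Student field is null, which is exactly what the isnull(Student) test matches. A minimal self-contained sketch to verify the behaviour (the makeresults data is made up; field names reproduce the question):

```
| makeresults count=3
| streamstats count AS n
| eval Student="student_".n, Score=n*1.5
| table Student Score
| addcoltotals Score
| eval Score = if(isnull(Student), floor(Score), Score)
```

Only the final totals row (where Student is null) gets floored; the per-student rows pass through unchanged.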
I'm counting "kb" as the volume of data received and ingested into the indexers. Is this wrong? And what is the relation between "kb" and "tcp_Kprocessed"? I'm still in doubt.
If I have already configured UDP 514 as an input, where is that inputs.conf file located so I can modify it for other logs?
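For reference, an input created through Splunk Web is written to the `local` directory of whichever app context was active at the time; a sketch of what the stanza typically looks like (the exact app directory is an assumption and depends on where the input was created):

```
# Typically $SPLUNK_HOME/etc/apps/<app>/local/inputs.conf
# (often the "search" app), or $SPLUNK_HOME/etc/system/local/inputs.conf.
# Run "splunk btool inputs list --debug" to see which file actually defines it.
[udp://514]
connection_host = ip
sourcetype = syslog
```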
Hello @verbal_666  I think this is the volume indexed.  
Hello @Snorre you should migrate Windows first. Personally, I prefer using a Linux distribution such as RHEL for Splunk servers, if you can switch.
Hello @kasperl  can you check the ACL of this file? Is it owned by root, unlike the splunkd process?
Hello @landzaat could you provide a screenshot of the whole screen showing this error? Thanks. For your information, scheduled PDF delivery may create a ScheduledView like this, and there may be a limit on the name length:   * If this helps, please upvote or accept solution *
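To see the names that scheduled PDF delivery has actually generated, and how long they are, a hedged sketch against Splunk's REST endpoint for scheduled views (assuming your role can read that endpoint):

```
| rest /servicesNS/-/-/scheduled/views
| eval name_length=len(title)
| table title name_length
| sort - name_length
```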
Thanks @isoutamo . This works as expected; the only thing is it's not grouping by user_id but rather by timeformat/_time every 1 hr. Is it possible to group by user_id? Current SPL: base search | rex user_id | eval takeIn = case(_time>=relative_time(now(),"@d"), "take", _time<=relative_time(now(), "-1d"), "take", true(), "drop") | where takeIn = "take" | timechart span=1h count | timewrap d series=short | fields _time s1 s0 | rename s1 as today, s0 as yesterday | where today != ""
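`timechart` and `timewrap` can only split over _time, so one hedged alternative (a sketch, assuming user_id is already extracted by the rex and the search window only covers today and yesterday) is to compute the day label yourself with eval and pivot with chart:

```
base search
| rex user_id
| eval day=if(_time>=relative_time(now(),"@d"), "today", "yesterday")
| chart count OVER user_id BY day
```

This gives one row per user_id with today/yesterday columns, at the cost of losing the hourly breakdown; keep `bin _time span=1h` and add hour to a `stats ... by` clause if the hourly detail is still needed.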
When trying to schedule a PDF delivery for a dashboard, the error message Parameter "name" must be 100 characters or less is displayed. The dashboard runs fine, export PDF has no issues. Where do I find this "name" parameter?
Hi @gcusello  Getting error  
Hello. We have made it work. This is the stanza we have configured in transforms.conf on the heavy forwarder:

[setindexHIGH]
SOURCE_KEY = field:topic
REGEX = audits
DEST_KEY = _MetaData:Index
FORMAT = imp_high

Thanks for your help.
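For completeness, a transform like this only fires when it is referenced from props.conf on the same heavy forwarder; a sketch of the companion stanza (the sourcetype name here is an assumption - use whatever sourcetype the events arrive with):

```
# props.conf on the same heavy forwarder
[your_kafka_sourcetype]
TRANSFORMS-set_index = setindexHIGH
```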
Hi @parthiban, ok, you could use this approach: create an alert that doesn't send an email, running this search:

index="XXXXX" "Genesys system is available"
| spath input=_raw output=new_field path=response_details.response_payload.entities{}
| mvexpand new_field
| fields new_field
| spath input=new_field output=serialNumber path=serialNumber
| spath input=new_field output=onlineStatus path=onlineStatus
| where serialNumber!=""
| lookup Genesys_Monitoring.csv serialNumber
| where Country="Bangladesh"
| stats count(eval(onlineStatus="offline")) AS offline_count count(eval(onlineStatus="online")) AS online_count earliest(eval(if(onlineStatus="offline",_time,null()))) AS offline_time earliest(eval(if(onlineStatus="online",_time,null()))) AS online_time
| fillnull value=0 offline_count
| fillnull value=0 online_count
| eval condition=case(
    offline_count=0 AND online_count>0, "Online",
    offline_count>0 AND online_count=0, "Offline",
    offline_count>0 AND online_count>0 AND online_time>offline_time, "Offline but newly online",
    offline_count>0 AND online_count>0 AND offline_time>online_time, "Offline",
    offline_count=0 AND online_count=0, "No data"),
  search="Device went offline and recovery status"
| search condition="Offline" OR condition="Online" OR condition="Offline but newly online"
| table search condition
| collect index=summary

then you can run a search like the following:

index=summary search="Device went offline and recovery status"
| stats dc(condition) AS condition_count last(condition) AS condition_last values(condition) AS condition
| search (condition_last="Offline" condition_count=1) OR (condition_last="Online" condition_count>1)

with an email action to inform you that there's a new offline, or an online after the offline. Please check the conditions because I cannot test them, but they should be correct. Ciao. Giuseppe
Hi, The AppD ansible collection for machine agent has an issue where if you want to change the values of tier, application, or node_name but they already have values in the conf file, you cannot change them without first uninstalling and then re-installing the agent. I can't give a link to the git repo, because the Ansible collection does not expose which git repo the collection was synced from. The collection page is https://galaxy.ansible.com/ui/repo/published/appdynamics/agents/, which also contains a tarball of the code. This specific code is both: roles/java/tasks/merging-controller-info.yml (starting line 98) roles/machine/tasks/merging-controller-info.yml (starting line 108) I can submit a PR for this if you point me to the git repo for it; otherwise, I would request that it be fixed. Any suggestions or a way forward?
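As a stopgap until the collection's merge logic is fixed, the values can be forced directly after the collection runs. A workaround sketch - the agent path and the `appd_*` variable names are assumptions, not part of the collection:

```yaml
# Hypothetical workaround: overwrite the XML elements even when they
# already have values, bypassing the collection's merge step.
- name: Force tier/application/node names in controller-info.xml
  ansible.builtin.replace:
    path: /opt/appdynamics/machine-agent/conf/controller-info.xml
    regexp: '<{{ item.tag }}>.*</{{ item.tag }}>'
    replace: '<{{ item.tag }}>{{ item.value }}</{{ item.tag }}>'
  loop:
    - { tag: 'application-name', value: '{{ appd_application_name }}' }
    - { tag: 'tier-name', value: '{{ appd_tier_name }}' }
    - { tag: 'node-name', value: '{{ appd_node_name }}' }
  notify: restart machine agent
```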
@PickleRick , My aim is to save license usage. Can you assist me in blacklisting some of the most common Windows security events?
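Event-code blacklisting is done in inputs.conf on the forwarder, and blacklisted events are dropped before they count against the license. A sketch using the documented blacklist setting - the event codes below are only frequently cited noisy examples (4662 object access, 5156/5158 Windows Filtering Platform), not a recommendation; review them against your own security requirements before dropping anything:

```
[WinEventLog://Security]
blacklist = 4662,5156,5158
```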
Hi @richgalloway  Can you please help me with the above requirement?
Hi @gcusello  I understood your first point. However, in our case, we don't want that type of requirement. Once we identify the offline message, we don't want to receive repeated alerts, because we have already created a ticket for that incident. We only need to be notified once the device becomes online. I believe throttling won't work for our requirement, as we already have a similar alert mechanism in our observability tools and we aim to implement the same in Splunk. For better understanding, I've provided an example. Please guide me on any way to achieve this requirement.

1st search - OFFLINE - alert will trigger
2nd search - OFFLINE - alert suppressed
3rd search - OFFLINE - alert suppressed
4th search - ONLINE - alert will trigger
5th search - ONLINE - alert suppressed
6th search - OFFLINE - alert will trigger
7th search - OFFLINE - alert suppressed
8th search - OFFLINE - alert suppressed
9th search - ONLINE - alert will trigger
10th search - ONLINE - alert suppressed
... etc.
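One common SPL pattern for "alert only on state changes" is to compare each result with the previous one and keep only the transitions; a hedged sketch, assuming the summary-index events with a condition field described earlier in this thread (adapt index and field names to your setup):

```
index=summary search="Device went offline and recovery status"
| sort 0 _time
| streamstats current=f window=1 last(condition) AS prev_condition
| where condition!=prev_condition OR isnull(prev_condition)
```

Used as the alert search with trigger condition "number of results > 0", this fires once on each OFFLINE-to-ONLINE or ONLINE-to-OFFLINE flip and stays silent while the state repeats, matching the example sequence above.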
I don't think any of those add-ons contains such functionality. You would need to explicitly gather such info with a scripted input, probably utilizing dmidecode in the Linux case (which is an external tool and doesn't have to be installed on your system) and some other tool on Windows (I'm not even sure there is a standardized way of doing that with a vanilla Windows installation).
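A minimal sketch of what such a scripted input could look like on Linux, assuming dmidecode is installed and the script runs with sufficient privileges (dmidecode reads raw DMI data); the key=value output format is just one convenient choice:

```shell
#!/bin/sh
# Scripted-input sketch: emit hardware identity as key=value pairs.
# Falls back to "unknown" when dmidecode is missing or fails.
get() {
  dmidecode -s "$1" 2>/dev/null || echo unknown
}
echo "vendor=$(get system-manufacturer) product=$(get system-product-name) serial=$(get system-serial-number)"
```

Drop it under an app's `bin/` directory and reference it from an inputs.conf `[script://...]` stanza with a suitable interval.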
Also please note (it's worth mentioning because it's not obvious) that if you aggregate some values into several multivalue fields (like in your case - the multivalue VM and col2 fields), the contents of those multivalue fields are from then on independent of each other. So you can't, for example, sort one using the order of the other. Again - it's not a spreadsheet.
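If the pairing does matter, one common workaround is to zip the two multivalue fields together before the correspondence is lost, then expand and split them back out; a sketch using the field names from this thread (assumes the values don't themselves contain the "|" delimiter):

```
| eval pair=mvzip(VM, col2, "|")
| mvexpand pair
| rex field=pair "^(?<VM>[^|]*)\|(?<col2>.*)$"
```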
I'm still not sure what the source datasets are and what the result should be. I see some attempts at solving this riddle in the thread, but I'm not 100% sure we're all on the same page regarding what we're working with and what we want to achieve in the end. Could you please post samples of your data and what the result should look like?
While I didn't do comparison tests myself, the general consensus is that XML-rendered Windows logs are the better choice. They do not cause problems with parsing (I vaguely recall there were some problems with ambiguous data in the traditionally formatted logs; colleagues with more experience of older versions could probably tell you more). Also, they tend to be actually smaller than traditionally formatted logs.
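Switching a Windows event log input to XML rendering is a one-line change in inputs.conf on the forwarder; a sketch for the Security channel:

```
[WinEventLog://Security]
renderXml = true
```

Note that the XML events usually need a matching sourcetype (e.g. from the Splunk Add-on for Microsoft Windows) so field extractions keep working after the switch.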