All Posts



Hi all, I set up Splunk and am trying to capture security logs from the client machine. My VM is set up as server/client with an Active Directory group setting. But I am getting a disk space error: "The diskspace remaining=9620 has breached the yellow threshold for filesystems=C:\Program Files\Splunk\var\lib\splunk\_metrics\colddb". But I have free space in the C drive. Please clarify.
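For context, Splunk's free-disk floor is controlled in server.conf, and the health warning fires before indexing actually stops. A minimal sketch of the relevant stanza is below; the value shown is illustrative (5000 MB is the default), so check server.conf.spec before changing it:

```
# server.conf -- illustrative value; 5000 MB is the default.
# Splunk pauses indexing on a partition once free space drops below this floor,
# and the health report warns (yellow) as you approach it.
[diskUsage]
minFreeSpace = 5000
```

The warning in the post reports about 9.6 GB remaining on the colddb volume, so either that partition really is low, or the threshold is set higher than expected for that filesystem.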
Thanks @PickleRick, the query line you posted is not supported; not sure if it was before. Splunk is erroring out, saying "unknown value 0".
Hi, we're measuring with a code snippet like the one below: the time after the logger completed minus the time before we called it.

Long start1 = System.currentTimeMillis();
log.info("Test logging");
Long start2 = System.currentTimeMillis();
log.info("logTime={}", start2 - start1);

We have not used a tcpdump yet, as this is running in a container, and we're not able to use batch either, since we need type=raw, which doesn't support batch configs, from my understanding. Is there a way to work with the Raw type and send as batch? Thanks.
So there's a bug with installing Splunk Enterprise 9.2.x and the universal forwarder on the same server, something that should work. I have opened a case with Splunk and requested them to document the issue in the known issues. They have not done that yet. 
Ah, then I guess we have different understandings of what "download a dashboard" means. The software you want is Splunk Enterprise Security.  It's a premium product, meaning it is available for download only by customers who have paid for it.  Contact your Splunk account team for more information.
Hi Giuseppe, thank you for highlighting the mistake. I corrected the variable to newValue2, but unfortunately I had no luck with the query.
@andrew_nelson  I was able to set it up with a calculated field! It was a basic thing, but it was very helpful. I'm going to study! thank you very much.
I don't want a pdf.  I want a piece of software running in my Splunk Enterprise.   Thanks
Thank you so much, one of our stanzas looks like the following:

[script://./bin/ulimit.sh]
interval = 27 5 * * *
source = scripted_input
sourcetype = virtualization:sanity:ulimit
index = os
disabled = false

Based on the link you provided, a reload should be fine. How would we run a "reload"? The options listed are:

inputs.conf http reload
inputs.conf script reload
inputs.conf monitor reload
inputs.conf <modular_input> reload
inputs.conf batch reload
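For what it's worth, those reloads are typically triggered through the management port's _reload endpoint for the matching input type (script, for this stanza). A sketch is below; the host, port, path, and credentials are illustrative, so verify the endpoint against your Splunk version's REST API reference before relying on it:

```shell
# Reload scripted inputs without restarting splunkd (illustrative credentials/host).
# Requires access to the management port (8089 by default).
curl -k -u admin:yourpassword \
    https://localhost:8089/services/data/inputs/script/_reload

# Roughly equivalent via the CLI, if you are on the Splunk server itself:
/opt/splunk/bin/splunk _internal call /services/data/inputs/script/_reload
```

Either form asks splunkd to re-read the inputs.conf stanzas of that input type in place.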
hey, how did you solve it then. i am having the same issue. is there a way to switch to 32? i am very very new in this
Have you tried printing the dashboard to a PDF?
I would like to download the Security Posture Dashboard.   The document “Security Posture dashboard” does not include a download link: https://docs.splunk.com/Documentation/ES/7.3.1/User/SecurityPosturedashboard
What are some good dashboards for displaying data ingested from AWS CloudWatch/CloudTrail?   thanks in advance 
I fixed the "Can't read key file" error by putting the contents of my server private key into the pem file. These two commands now show the information properly:

openssl rsa -in /opt/splunk/etc/auth/mycerts/myServerCertificate.pem -text
openssl x509 -in /opt/splunk/etc/auth/mycerts/myServerCertificate.pem -text -noout

openssl rsa now properly shows the RSA private key (modulus, primes, etc.), and openssl x509 works fine, as I mentioned before. However, splunkd.log still shows "sslv3 alert certificate unknown". Thanks.
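A "certificate unknown" alert usually means the peer doesn't trust the certificate chain, not that the files are unreadable, so it can help to check the chain with openssl verify. The sketch below uses a throwaway self-signed certificate just to show what a successful verification looks like; with your real files you would point -CAfile at your CA bundle and verify myServerCertificate.pem instead:

```shell
# Generate a throwaway self-signed cert (a stand-in for a real server cert + CA).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout demo_key.pem -out demo_cert.pem -subj "/CN=demo" 2>/dev/null

# Verify the cert against its CA; for a self-signed cert, the CA is itself.
# With real files this would look like:
#   openssl verify -CAfile /opt/splunk/etc/auth/mycerts/myCACertificate.pem \
#       /opt/splunk/etc/auth/mycerts/myServerCertificate.pem
openssl verify -CAfile demo_cert.pem demo_cert.pem
```

If the real verification fails, the connecting side is likely rejecting the cert because its CA is not in the trust store that splunkd (or the client) is configured to use.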
| rest /servicesNS/-/-/data/ui/views splunk_server=local ``` Produces all views present on the local search head ```
| table id, updated, eai:acl.removable, eai:acl.app ``` eai:acl.removable tells whether the dashboard can be deleted. removable=1 means it can be deleted; removable=0 means it could be a system dashboard ```
| rename eai:acl.* as *
| rex field=id ".*\/(?<dashboard>.*)$"
| table app dashboard updated removable
| join type=left dashboard app
    [ search index=_audit ```earliest=<setasperyourneeds> host=<yoursearchhead>``` action=search provenance="UI:Dashboard:*" sourcetype=audittrail savedsearch_name!=""
    | stats earliest(_time) as earliest_time latest(_time) as latest_time by app provenance
    | convert ctime(*_time)
    | rex field=provenance ".*\:(?<dashboard>.*)$"
    | table earliest_time latest_time app dashboard ``` produces dashboards used in the time range given by earliest/global time range ``` ]
| where isnull(earliest_time) AND removable=1 ``` condition to return only dashboards that are not viewed ```
| stats values(dashboard) as dashboard by app
Hi @splunky_diamond , it's always a pleasure! Ciao. Giuseppe
Thank you very much @gcusello! You never fail to deliver the best solutions for Splunk newbies like me.
Hi @splunky_diamond , the best guide for add-on creation is the Splunk Add-on Builder app (https://splunkbase.splunk.com/app/2962). It guides you through the creation and normalization of your data, so you get a CIM-compliant data flow that you can also use in ES or ITSI. Ciao. Giuseppe
Hello Splunkers! I am collecting logs from Fudo PAM, for which I haven't found any suitable existing add-on on the Splunkbase website. The logs are being collected over syslog, yet the regular "syslog" sourcetype doesn't suit the events coming from my source. I searched the web for tutorials on how to create your own add-on in Splunk to parse unusual logs like mine, but I haven't found any.

Could someone please help me with that? Does anyone have a tutorial or guide on how to create your own parser, or can you maybe explain what is needed for it, in case it's not a difficult task? If someone decides to answer by explaining how to create your own add-on, I would really appreciate a detailed description covering: required skills, difficulty, how long it will take, and whether it's the best practice in such situations or there are more efficient ways.

Again, the main goal for me is to get my logs from Fudo PAM (coming over syslog) parsed properly. Thank you for taking the time to read my post and reply.
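As a starting point before (or instead of) a full add-on, custom parsing in Splunk is often just a small props.conf stanza defining a sourcetype. The sketch below is a minimal example under assumptions: the sourcetype name fudo:pam, the timestamp format, and the extraction regex are all hypothetical, and would need to match the actual Fudo PAM event format:

```
# props.conf -- hypothetical sourcetype for a Fudo PAM syslog feed.
# Adjust TIME_FORMAT and the extraction regex to your actual events.
[fudo:pam]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %b %d %H:%M:%S
# Example inline field extraction; replace with a regex matching your events.
EXTRACT-user = user=(?<user>\S+)
```

The sourcetype is then assigned to the input (in inputs.conf) so these rules apply at search time and index time; the Add-on Builder essentially generates and packages this kind of configuration for you.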
 1. My task is to calculate the number of events with the "FAILED" value in the "RESULT" key; it looks like this and it works (thanks to you guys!):

index="myIndex" sourcetype="mySourceType"
| foreach "*DEV*" "UAT*" [| eval keep=if(isnotnull('<<FIELD>>'), 1, keep)]
| where keep==1
| stats count(eval('RESULT'=="FAILED")) as FAILS
| stats values(FAILS)

This gets even more confusing. "Number of events with 'FAILED' value in 'RESULT' key" implies that you already have a field (key) named RESULT that may have a value of FAILED. If this is correct, shouldn't your search begin with index="myIndex" sourcetype="mySourceType" RESULT=FAILED?

| stats count(eval('RESULT'=="FAILED")) as FAILS gives one single numeric value. What is the purpose of cascading | stats values(FAILS) after it? | stats count(eval('RESULT'=="FAILED")) as FAILS | stats values(FAILS) gives the exact same single value.

Most importantly, as @PickleRick and I repeatedly point out, Splunk (and most programming languages) do not perform sophisticated calculations in the name space, mostly because there is rarely a need to do so. When there is a serious need for manipulating the variable name space, it is usually because the upstream programmer made a poor design. In Splunk's case, it is super flexible in handling data without preconceived field names. As @bowesmana suggested, if you can demonstrate your raw data containing those special keys, it is probably much easier (and more performant) to simply use a TERM() filter to limit raw events rather than trying to apply semantics in extracted field names. (TERM is case insensitive by default.) If you find TERM() too limiting, you can also use Splunk's super flexible field extraction to extract environment groups "Prod" and "Dev" using regex.
This way, all you need to do is:

index="myIndex" sourcetype="mySourceType" RESULT=FAILED environment=Dev
| stats count

You can even do something like:

index="myIndex" sourcetype="mySourceType" RESULT=FAILED
| stats count by environment

Any of these alternatives is better in clarity and efficiency.