All Posts


Hi Team, I have created a dashboard with the below-mentioned query. The output is a column chart with a time picker, and it displays the Top 10 Event Codes with their counts when we choose a time range and click Submit.

index=windows host=* source=WinEventLog:System EventCode=* (Type=Error OR Type=Critical)
| stats count by EventCode Type
| sort -count
| head 10

Once the results are displayed in the column chart, my requirement is: if we click any one of the Event Codes in the Top 10 shown in the chart (take 4628 as an example), it should navigate to a new panel or window showing the Top 10 host, source, Message, and EventCode along with the count for Event Code 4628. That is the kind of result we want displayed, and it should happen when we click the EventCode in the column chart of the existing dashboard. Example:

index=windows host=* source=WinEventLog:System EventCode=4628 (Type=Error OR Type=Critical)
| stats count by host source Message EventCode
| sort -count
| head 10

So kindly let me know how to achieve this requirement in a dashboard.
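One common way to do this in Simple XML is a chart drilldown that sets a token, plus a second panel that depends on that token. A minimal sketch of that approach follows; the token names ec_token and time_tok and the detail-panel title are placeholders of mine, not from the post:

<form>
  <fieldset submitButton="true">
    <input type="time" token="time_tok">
      <label>Time range</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <search>
          <query>index=windows host=* source=WinEventLog:System EventCode=* (Type=Error OR Type=Critical)
| stats count by EventCode Type | sort -count | head 10</query>
          <earliest>$time_tok.earliest$</earliest>
          <latest>$time_tok.latest$</latest>
        </search>
        <option name="charting.chart">column</option>
        <drilldown>
          <!-- $click.value$ carries the clicked x-axis value, i.e. the EventCode -->
          <set token="ec_token">$click.value$</set>
        </drilldown>
      </chart>
    </panel>
  </row>
  <row>
    <!-- hidden until ec_token is set by a click on the column chart -->
    <panel depends="$ec_token$">
      <title>Top 10 details for EventCode $ec_token$</title>
      <table>
        <search>
          <query>index=windows host=* source=WinEventLog:System EventCode=$ec_token$ (Type=Error OR Type=Critical)
| stats count by host source Message EventCode | sort -count | head 10</query>
          <earliest>$time_tok.earliest$</earliest>
          <latest>$time_tok.latest$</latest>
        </search>
      </table>
    </panel>
  </row>
</form>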
Already done. The system paths are monitored, but no file is ingested. I think this is a security feature to exclude direct access to "/".

Monitored Directories:
/...
/.autorelabel
/afs
/bin
/boot
/boot/.vmlinuz-5.14.0-284.11.1.el9_2.x86_64.hmac
/boot/.vmlinuz-5.14.0-284.30.1.el9_2.x86_64.hmac
/boot/config-5.14.0-284.11.1.el9_2.x86_64
/boot/config-5.14.0-284.30.1.el9_2.x86_64
/boot/efi
/boot/grub2
/boot/initramfs-0-rescue-d264ca908f764f5191a3c479f3e6f4bc.img
/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64.img
/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img
/boot/initramfs-5.14.0-284.30.1.el9_2.x86_64.img
/boot/initramfs-5.14.0-284.30.1.el9_2.x86_64kdump.img
/boot/loader
/boot/symvers-5.14.0-284.11.1.el9_2.x86_64.gz
/boot/symvers-5.14.0-284.30.1.el9_2.x86_64.gz
/boot/System.map-5.14.0-284.11.1.el9_2.x86_64
/boot/System.map-5.14.0-284.30.1.el9_2.x86_64
/boot/vmlinuz-0-rescue-d264ca908f764f5191a3c479f3e6f4bc
/boot/vmlinuz-5.14.0-284.11.1.el9_2.x86_64
/boot/vmlinuz-5.14.0-284.30.1.el9_2.x86_64
/dev
/dev/almalinux
/dev/block
/dev/bsg
/dev/cdrom
/dev/char
/dev/core
/dev/cpu
/dev/disk
/dev/dma_heap
/dev/dri
/dev/fd
/dev/hugepages
/dev/initctl
/dev/input
/dev/log
/dev/mapper
/dev/mqueue
/dev/net
/dev/pts
/dev/rtc
/dev/shm
/dev/snd
/dev/stderr
/dev/stdin
/dev/stdout
/dev/vfio
/etc
/home
/lib
/lib64
/media
/mnt
/proc
/proc/acpi
/proc/bus
/proc/dma
/proc/fb
/proc/fs
/proc/irq
/proc/keys
/proc/kmsg
/proc/net
/proc/sys
/proc/tty
/root
/run
/sbin
/srv
/sys
/This_is_Just_A_Test
/usr

This can also be inferred from the "/This_is_Just_A_Test" path, which contains many .txt files. With "/..." they are skipped; with an explicit

[monitor:///This_is_Just_A_Test]

they are ingested. I really think it's a security feature that prevents "/" from being fully accessed.
@mochocki @jkat54 @kiran_panchavat need your assistance here.
I shouldn't think so. I'd expect it rather to be a permissions/SELinux issue or something like that. Run splunk list monitor and splunk list inputstatus to check what the UF is actually monitoring and reading.
Hi @ITSplunk117, it seems that you have some inconsistency in your data files. Open a ticket with Splunk Support, sending them a diag of the affected indexer. Ciao. Giuseppe
Hi @allidoiswinboom, do you also have the Splunk_TA_F5 on your indexers? It transforms logs, and maybe your transformation isn't effective because f5:bigip:syslog has already been transformed into something else. Using the regex, which sourcetype do your events end up with? Ciao. Giuseppe
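For example, a quick way to check which sourcetypes the events actually end up with (the index name below is just a placeholder; use wherever your F5 data lands):

index=your_f5_index earliest=-4h
| stats count by sourcetype source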
Hi there. A simple question; it's not for real usage, just a curiosity. Does the UF block inputs for system paths by default? As an example, theoretically an input like this:

[monitor:///...]
whitelist=.
index=root
sourcetype=root_all
disabled=0

should ingest all non-binary files under the "/" path, including subdirectories. In actual fact, I find only the "/boot" path ingested. Is this a security feature to exclude system paths under "/" from being ingested? Thanks
It was an issue with NTP sync on the servers. Got it corrected and it's fine now.
Which is better when migrating to new hardware: splunk offline --enforce-counts or a data rebalance? And do I have to put the peer into manual detention in the indexer cluster before running a data rebalance or splunk offline?
Hi @kiran_panchavat, Thank you for your valuable response. Which input do I need to provide to collect the status of "App Services"? Options:
Azure Metrics
Azure Resource
Azure Audit
@msmadhu You can follow the below documentation:
Getting Microsoft Azure Data into Splunk | Splunk
Splunking Microsoft Cloud Data: Part 1 | Splunk
Splunk Add-on for Microsoft Cloud Services | Splunkbase
Splunk Add-on for Microsoft Cloud Services - Splunk Documentation
Hi @Roy_9, Changes should be logged in index=_audit:

index=_audit host IN (sh) action=modified info=succeeded savedsearch_name=xyz earliest=-30d

Replace "sh" with a list of your search head host names and "xyz" with the name of the report.
Hello @yuanliu Sorry if I did not explain it clearly. Yes, I am clear on how span and snapping work in Splunk. I appreciate your help as always. Thank you!!
Hello @bowesmana Your last suggestion is exactly what I am looking for. I accepted it as the solution. Thank you for your help. This is what I am trying to do:

Dropdown to choose the day of week as a token: Sunday [weekly_token=0], Monday [weekly_token=1], Tuesday [weekly_token=2], Wednesday [weekly_token=3], Thursday [weekly_token=4], Friday [weekly_token=5], Saturday [weekly_token=6]

| table _time, Student, MathGrade, EnglishGrade, ScienceGrade
| where strftime(_time, "%w") = "$weekly_token$"
| fields - info*
| timechart span=1w first(MathGrade) by Student useother=f limit=0
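For reference, a minimal Simple XML sketch of such a weekly dropdown (the label text and the default value are assumptions of mine, not from the thread):

<form>
  <fieldset submitButton="false">
    <input type="dropdown" token="weekly_token">
      <label>Day of week</label>
      <choice value="0">Sunday</choice>
      <choice value="1">Monday</choice>
      <choice value="2">Tuesday</choice>
      <choice value="3">Wednesday</choice>
      <choice value="4">Thursday</choice>
      <choice value="5">Friday</choice>
      <choice value="6">Saturday</choice>
      <default>0</default>
    </input>
  </fieldset>
</form>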
Hi, I have a weird problem: when I run a query in Splunk, there are events found, but the event log field is always blank.

However, the problem is fixed by the below steps:
1. In the Splunk search result, go to All Fields in the left rail
2. Change Coverage: 1% or more to All fields
3. Click Deselect All
4. Click Select All Within Filter

Then the problem is fixed and I can see the event logs. Even if I change from All fields back to Coverage: 1% or more, I can still see the event logs. But after I close the browser tab, go back to Splunk, and search again, the problem is back.

So the question is: why does Coverage: 1% or more cause this problem the first time I query? Anyone have an idea about this? Thanks.
Again, this is a data problem. As you described in the OP, field_E and field_A only exist in source_1; field_C and field_B only exist in source_2. When you get results reminiscent of your latest illustration, do you have evidence that field_B in source_2 has a value of 11111179 AND a non-null value of field_C? In other words, does this give you any result in the exact search period you used to obtain that table?

index=index_1 sourcetype=source_2 earliest=1 latest=now() field_C=* field_B=11111179

(By the way, I don't think you posted the correct mock code, because earliest=1 latest=now() will always give you no results, therefore none of the rows should have field_C populated. Unless your Splunk has a time machine in it.)
@LearningGuy In case it is not obvious, strftime(_time, "%w") is precisely performing a modulus function. Why reinvent the wheel? Not only that: when you do | bin _time span=1w@w as @bowesmana and I suggested, Splunk does a division, a modulus, then a negative time shift. Similarly, when you do relative_time(_time, "+5d@d"), Splunk does a modulus (on the day, not the week) then a positive time shift. Splunk is all math. All these built-in functions have been subjected to hours and hours of QA tests and require zero maintenance on the programmer's part. I'd recommend using these robust methods.

But really, I failed to comprehend the follow-ups because you did not describe exactly what results you got from the proposed formulae or explain why the results did not meet your requirements. @bowesmana and I have been explaining to you why some counts are displayed on a date string that you believe should not have counts. But if you count by week, the starting date of your counting method SHOULD be the display date. This means that if your data begins on a Wednesday but your week starts on Sunday, the first data point SHOULD appear under Sunday, NOT under Wednesday. If you shift your week's start to Wednesday, but your data only starts on Friday, the very first data point SHOULD appear under Wednesday, NOT under Friday. Are we clear on this?
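A small runnable illustration of that snapping behavior (the date is invented for the demo): an event timestamped on a Wednesday, binned with span=1w@w, is displayed under the preceding Sunday.

| makeresults
| eval _time = strptime("2023-11-15", "%Y-%m-%d") ``` a Wednesday ```
| bin _time span=1w@w ``` snap to the start of the week (Sunday) ```
| eval week_start = strftime(_time, "%F (%A)") ``` gives 2023-11-12 (Sunday) ```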
If I read your samples right, each of those "data" blocks is its own event. Is this correct? (By the way, you would help volunteers and yourself greatly if you posted sample/mock data in raw text format that is JSON compliant; Splunk's beautified display is not.) In that case, Splunk would have given you three fields of interest: data.entity-id, data.message-name, and data.total-received. Do you get these?

Assuming both assumptions are correct, xyseries is your friend, like this:

| stats sum(data.total-received) as subtotal by data.message-name data.entity-id
| xyseries "data.entity-id" "data.message-name" subtotal

Your mock events give you:

data.entity-id  handshake  switchPlayer
1               10         42
2               12         55

Note, I reconstructed JSON compliant events as the following:

{ "data": { "entity-id": 1, "message-code": 445, "message-name": "handshake", "total-received": 10 } }
{ "data": { "entity-id": 1, "message-code": 269, "message-name": "switchPlayer", "total-received": 20 } }
{ "data": { "entity-id": 1, "message-code": 269, "message-name": "switchPlayer", "total-received": 22 } }
{ "data": { "entity-id": 2, "message-code": 445, "message-name": "handshake", "total-received": 12 } }
{ "data": { "entity-id": 2, "message-code": 269, "message-name": "switchPlayer", "total-received": 25 } }
{ "data": { "entity-id": 2, "message-code": 269, "message-name": "switchPlayer", "total-received": 30 } }

This is an emulation you can play with and compare with real data:

| makeresults
| eval data = split("{ \"data\": { \"entity-id\": 1, \"message-code\": 445, \"message-name\": \"handshake\", \"total-received\": 10 } }
{ \"data\": { \"entity-id\": 1, \"message-code\": 269, \"message-name\": \"switchPlayer\", \"total-received\": 20 } }
{ \"data\": { \"entity-id\": 1, \"message-code\": 269, \"message-name\": \"switchPlayer\", \"total-received\": 22 } }
{ \"data\": { \"entity-id\": 2, \"message-code\": 445, \"message-name\": \"handshake\", \"total-received\": 12 } }
{ \"data\": { \"entity-id\": 2, \"message-code\": 269, \"message-name\": \"switchPlayer\", \"total-received\": 25 } }
{ \"data\": { \"entity-id\": 2, \"message-code\": 269, \"message-name\": \"switchPlayer\", \"total-received\": 30 } }", "
")
| mvexpand data
| rename data AS _raw
| spath
``` data emulation above ```
Assuming these are separate events and the fields are auto-extracted JSON fields, then this statement will give you your table:

... your search ...
| chart sum("data.total-received") over data.entity-id by data.message-name

As for having a dropdown where you can choose the message codes you want to display: use a multiselect input with a populating search that does

... your search ...
| stats count by data.message-name

and use the "field for label"/"field for value" settings to assign the name. You will then have to work out the tokenisation so that the search driving the table filters to just the names you selected; see the sketch after this post. Also, you could use a single base search to drive the population of the dropdown as well as the results for the table, which would improve your dashboard load times, but I'll leave that as an exercise for you to play with.
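A minimal Simple XML sketch of that multiselect wiring (the index name and the token name msg_name are placeholders of mine, not from the thread):

<form>
  <fieldset submitButton="false">
    <input type="multiselect" token="msg_name">
      <label>Message name</label>
      <!-- populating search: one choice per observed message name -->
      <search>
        <query>index=your_index | stats count by data.message-name</query>
      </search>
      <fieldForLabel>data.message-name</fieldForLabel>
      <fieldForValue>data.message-name</fieldForValue>
      <!-- quote each selected value and join them with commas in the token -->
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter>, </delimiter>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=your_index
| search data.message-name IN ($msg_name$)
| chart sum("data.total-received") over data.entity-id by data.message-name</query>
        </search>
      </table>
    </panel>
  </row>
</form>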
Assuming these are separate events and that you have, or can extract, the fields from your event data, try something like this:

| chart sum('total-received') as total over 'entity-id' by 'message-name'