All Topics

Hello everyone, I need your help displaying the date and time of the monthly report update at the top of the report. Thank you so much.
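One minimal sketch: add a small panel above the report that evaluates the run time whenever the scheduled report refreshes (the field name report_updated is just an example, not anything from your environment):

  | makeresults
  | eval report_updated = strftime(now(), "%Y-%m-%d %H:%M:%S")
  | table report_updated

If this panel runs on the same schedule as the report, it always shows the time of the latest update.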
Hi. I see a rather strange (for me, a newbie) issue with a number of *nix devices. After the UF agent install, the devices reported data for a couple of days but showed status "unstable". A day later the devices stopped updating in Splunk. On the devices I found this error message:

  splunk.service - SYSV: Splunk indexer service
     Loaded: loaded (/etc/rc.d/init.d/splunk; bad; vendor preset: disabled)
     Active: inactive (dead)
       Docs: man:systemd-sysv-generator(8)
  Warning: splunk.service changed on disk. Run 'systemctl daemon-reload' to reload

I found that some people experienced a similar issue and fixed it by updating the init.d script:

  splunk_start() {
    echo Starting Splunk...
    ulimit -Hn 20240
    ulimit -Sn 10240

I implemented the proposed change, and it did help for a few days. Now I see the devices being updated in Splunk on a regular basis, but they are reported as "unstable" and no CPU/MEMORY/DISK data is being reported. Thank you in advance.
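A quick way to confirm the unit change was picked up and the new limits actually apply to the running process (a sketch, assuming a default /opt/splunkforwarder install):

  # reload systemd so it stops warning about the changed unit file
  sudo systemctl daemon-reload
  sudo systemctl status splunk
  # verify the forwarder is up
  /opt/splunkforwarder/bin/splunk status
  # check the file-descriptor limits of the running splunkd process itself
  grep 'open files' /proc/$(pgrep -o splunkd)/limits

If the /proc limits still show the old values, the ulimit change in the init script is not reaching the daemon.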
We have a requirement to share a Splunk report externally, and I'm aware we can achieve this by using iframes to embed the report. My question is who can access the iframe (I assume it's everyone who has a copy of the iframe/URL) and whether there's a way to restrict access to certain users only. Can the restriction be applied within Splunk, or does it need to be applied within the external web app that we're sharing with?
Hello, team. I've made a script which uses the sudo command. I've deployed it on my forwarders, and I get this error:

  message from "/opt/splunkforwarder/etc/apps/app/bin/script.sh" sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?

My forwarders start as the splunk user (if I change boot-start to root, the script works). The splunk user is in sudoers and has rights to execute sudo commands, but as far as I understand the script must be executed as root, not as any other user, even one with sudo privileges. On /usr/bin/sudo the nosuid option is not set, and the file system isn't NFS. I tried making root the owner of the script and setting the setuid bit on it, but it still doesn't work. Any ideas? How can I make the script executable by the splunk user?
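For reference, the checks I would run here (a sketch; nothing below is specific to your hosts):

  # confirm which mount /usr/bin/sudo lives on and its options
  findmnt -T /usr/bin/sudo
  # sudo must be setuid root: expect -rwsr-xr-x root root
  ls -l /usr/bin/sudo
  # if splunkd is systemd-managed, check whether the unit sets NoNewPrivileges,
  # which blocks setuid binaries such as sudo for all child processes
  systemctl show splunk | grep -i NoNewPrivileges

The last one is worth checking first: if NoNewPrivileges=yes, no script launched by splunkd can elevate via sudo regardless of sudoers.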
I use the Splunk DB Connect add-on to collect MS SQL DB audit logs. Recently I updated the query so we don't collect some logs that are not needed. The collection runs every 5 minutes, and because of the new query there are quite a few runs where the result is 0 rows (this is expected). The problem is that I started receiving the following error all the time:

  ERROR ResultSetIterator:157 - action=print_record_failed row=null error="java.lang.NullPointerException"

How can I disable this "error"? It's flooding my _internal index and consuming storage for nothing.
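Until the underlying cause is fixed, one workaround (a sketch, and the source pattern below is an assumption; match it to whichever DB Connect log file carries these lines) is to drop them at parse time with a nullQueue transform:

  # props.conf (on the indexers, or the HF that parses the data)
  [source::...dbx*.log*]
  TRANSFORMS-drop_npe = drop_dbx_npe

  # transforms.conf
  [drop_dbx_npe]
  REGEX = action=print_record_failed\s+row=null
  DEST_KEY = queue
  FORMAT = nullQueue

Events matching the REGEX are discarded before indexing, so they stop consuming storage in _internal.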
Hi, I want to create an alert for when the threshold breaches 70% for 5 consecutive minutes. The query I wrote is:

  sourcetype="os" identity_operation="GetUser" minutesago=1
  | eval EndpointName = "Get User"
  | stats count by EndpointName
  | eval message = case(count >= 1 * 1200, "100% alert",
                        count >= 0.9 * 1200, "90% alert",
                        count >= 0.8 * 1200, "80% warning",
                        count >= 0.7 * 1200, "70% warning")
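A sketch of one way to require five consecutive breaching minutes (assuming, as in your case() expression, that 1200 events per minute is 100% capacity):

  sourcetype="os" identity_operation="GetUser" earliest=-15m
  | bin _time span=1m
  | stats count by _time
  | eval breach = if(count >= 0.7 * 1200, 1, 0)
  | streamstats window=5 sum(breach) as breaches_in_window
  | where breaches_in_window = 5

Schedule it every minute and alert when results are returned; streamstats only reaches 5 once five minutes in a row have breached the 70% line.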
So recently I migrated from a standalone instance to a clustered environment. Everything is working well, but there's this one thing: I have a vSphere server that was previously configured to send data to Splunk, where we just specify the IP of the syslog server (in my case Splunk) and the data arrives there. Now I need to forward this data using the load balancing and indexer discovery features, so that it goes to the different peers rather than a single indexer. What's the best way to keep this working; are there any ideas? I was thinking of deploying another lightweight syslog server, which resends the vSphere logs to a Splunk forwarder, where I can configure it for Load Balancing and Indexer Discovery to resend the data to Splunk.
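If you go the syslog-server-plus-forwarder route, the forwarder side would look roughly like this (a sketch; the manager URI and key are placeholders):

  # outputs.conf on the forwarder reading the syslog files
  [indexer_discovery:cluster1]
  master_uri = https://cluster-manager.example.com:8089
  pass4SymmKey = <discovery key set on the cluster manager>

  [tcpout:discovered_peers]
  indexerDiscovery = cluster1
  autoLBFrequency = 30

  [tcpout]
  defaultGroup = discovered_peers

With indexerDiscovery set, the forwarder asks the cluster manager for the current peer list and load-balances across it automatically, so peers can come and go without touching the forwarder config.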
Folks, I tried to install Eventgen, but it looked like it was not working after I followed the install instructions on GitHub. (I downloaded it from Splunkbase, uploaded it in the web UI, and enabled it.) My Splunk is 9.0.4.1 on my Windows laptop, for testing. Can anyone give me advice on where I should check the active .conf file, or where I can see whether this app is working correctly?
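Two quick checks that might help (a sketch; I'm assuming a default Windows install path):

  :: show the merged eventgen.conf that Splunk actually sees
  cd "C:\Program Files\Splunk\bin"
  splunk btool eventgen list --debug

and in the search bar, look for the app's own logging:

  index=_internal source=*eventgen*

If btool shows no stanzas, your eventgen.conf was never picked up; if _internal has nothing for that source, the generator never started.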
Hi, my single-event length is too long, so I want to extract and ingest only a specific part of it. The part is in the middle of the event, so I tried extracting it using BREAK_ONLY_BEFORE and BREAK_ONLY_AFTER. I also used the LINE_BREAKER setting, but it is not working as expected. How can we define the start and end of the log in props.conf? Is there any alternative way to achieve this? Log sample:
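Since BREAK_ONLY_BEFORE/AFTER only control how lines are merged into events, one alternative is an index-time SEDCMD that keeps just the middle section. A sketch, where START_MARKER and END_MARKER are placeholders for whatever delimits the part you want:

  # props.conf
  [your_sourcetype]
  SHOULD_LINEMERGE = false
  LINE_BREAKER = ([\r\n]+)
  # keep only the text between the two markers, drop the rest of the event
  SEDCMD-keep_middle = s/^.*?(START_MARKER.*?END_MARKER).*$/\1/

This only works if the markers are stable in every event; without the actual log sample I can't suggest a concrete regex.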
Hi guys, on the license page for Splunk Phantom, what is the difference between "PHANTOM LICENSE INFORMATION" and "SPLUNK LICENSE INFORMATION"? I assume the "SPLUNK LICENSE INFORMATION" is about the amount of data that we can fetch from Splunk Enterprise and ingest into Splunk Phantom. Am I correct?
Hello, I need to remove a site in a three-site cluster. I am following the instructions in https://docs.splunk.com/Documentation/Splunk/8.2.0/Indexer/Decommissionasite, summarising the commands as follows:
- Check that the cluster is in the complete state
- Move the manager away from the decommissioned site
- Remove the peers in the decommissioned site as receivers for the UFs
- Enter maintenance mode
- Modify server.conf (manager node):
  from: available_sites = site1, site2, site3  to: available_sites = site1, site2
  from: site_replication_factor = origin:1,site1:1,site2:1,site3:1,total:3  to: site_replication_factor = origin:2,total:3
  from: site_search_factor = origin:1, total:2  to: site_search_factor = origin:1,total:2
  add: site_mappings = site3:site1
- Restart the manager
- Disable maintenance mode
- Stop Splunk on each peer in the decommissioned site
- Wait for the cluster to return to the complete state
- Remove the peers
How can I verify that everything went as expected? Check buckets, queries...? Thanks
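A few checks I'd run afterwards (a sketch; the CLI command on the manager node, the searches from a search head):

  splunk show cluster-status --verbose

to confirm the search and replication factors are met; then

  | rest /services/cluster/master/peers
  | table label site status

to confirm no site3 peers remain registered, and

  | dbinspect index=*
  | stats count by splunk_server, state

to confirm bucket copies now live on the remaining sites' peers.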
I have abruptly become unable to access Splunk ES, with the error message "Fetch failed: authentication/current-context". What could be the issue, and is there a resolution for it? Thanks.
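That message refers to a Splunk REST endpoint, so a first check (a sketch; host and credentials are placeholders) is whether the endpoint itself answers:

  curl -k -u admin https://localhost:8089/services/authentication/current-context

If this fails or times out, the problem lies with splunkd or the authentication/session layer rather than with ES specifically.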
I have the following JSON structure in my events. I am trying to figure out an SPL query to format the JSON in a table for a dashboard. The names of the WLCs could change, so WLC-1 will not always be the first entry or have the same name. Is it possible to make a dynamic table like the one below? Thank you.

Desired output:
  WLC-1
    SSID1: 2
    SSID2: 4
  WLC-2
    SSID1: 16
    SSID3: 8
  WLC-3
    SSID2: 6
    SSID3: 6
    SSID4: 9

Event structure:
  {
    "WLC-1": { "SSID1": 2, "SSID2": 4 },
    "WLC-2": { "SSID1": 16, "SSID3": 8 },
    "WLC-3": { "SSID2": 6, "SSID3": 6, "SSID4": 9 }
  }
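One dynamic approach (a sketch, assuming one such JSON object per event) is to flatten the object with spath and then transpose, so the WLC and SSID names never need to be hard-coded:

  your_search
  | spath
  | fields - _time _raw
  | transpose 0
  | rename column as path, "row 1" as count
  | rex field=path "^(?<WLC>[^.]+)\.(?<SSID>.+)$"
  | where isnotnull(WLC)
  | table WLC SSID count

transpose turns every WLC-*.SSID* field into a row (0 lifts the default row limit), and the rex splits the dotted path back into the two columns.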
Hi, I have installed the Splunk UF on the E drive of a Windows server and am able to monitor all the logs present on the E drive. I have a request to monitor network-drive logs from the same Windows server, and the user has full access to the network drive. I have placed a monitor stanza in splunk_home/etc/system/local/inputs.conf (E drive) with the path to the logs on the network drive, but I do not see any network-drive logs in Splunk. Is there a way to monitor network-drive logs when the UF is installed on the E drive of a Windows server?
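Two things usually matter here: the UF service account (Local System typically cannot see network shares or mapped drive letters) and using a UNC path rather than a drive letter. A sketch, with the share path, index, and sourcetype as placeholders:

  # inputs.conf
  [monitor://\\fileserver\logshare\app\*.log]
  disabled = false
  index = your_index
  sourcetype = your_sourcetype

and run the SplunkForwarder service as a domain account that has read access to the share.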
Hi, I want to compare the data from two days by data type. My expected result is as below; is it possible?

  Data Type | Yesterday | Yesterday Count | Today     | Today Count | Count Change Rate
  A         | 2023/3/26 | 15              | 2023/3/27 | 18          | 0.20
  B         | 2023/3/26 | 20              | 2023/3/27 | 19          | -0.05
  C         | 2023/3/26 | 16              | 2023/3/27 | 35          | 1.19
  D         | 2023/3/26 | 21              | 2023/3/27 | 40          | 0.90
  E         | 2023/3/26 | 30              | 2023/3/27 | 25          | -0.17
  F         | 2023/3/26 | 40              | 2023/3/27 | 50          | 0.25
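A sketch of one way to do it (the index and the data_type field name are placeholders for your own):

  index=your_index earliest=-1d@d
  | eval period = if(_time < relative_time(now(), "@d"), "Yesterday", "Today")
  | stats count(eval(period="Yesterday")) as yesterday_count,
          count(eval(period="Today")) as today_count
          by data_type
  | eval change_rate = round((today_count - yesterday_count) / yesterday_count, 2)

The date columns can be added with strftime(relative_time(now(), "-1d@d"), "%Y/%m/%d") and strftime(relative_time(now(), "@d"), "%Y/%m/%d") if you need them in the table.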
Hi, I added Windows, Linux, and VMware inputs to the IT Essentials Work app. I can drill down to the detail of each Windows host, Linux host, vCenter, and datastore from the Infrastructure Overview dashboard, except for the VMs: when I click to drill down into any VM, it shows a blank page. How do I solve this issue? Thank you
Hello, I'm using the MISP42 app, with which I receive a lot of events from a custom command that queries the MISP API. All those events are retrieved with a search query like this one: | mispgetioc field1=xxx field2=yyyy field3=uuu I've created a new index called misp where I would like to put the events that I retrieve from the search. For this I pipe the previous command into the collect command, like this: | mispgetioc ... | collect index=misp. When I go to the indexes view I can see that my index is populated with events, so it means it works, from what I understand (URL: http://localhost:9000/en-US/manager/misp42splunk/data/indexes#). But unfortunately, when I type index=misp in the search view (URL: http://localhost:9000/en-US/app/search/search), no events come up.
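A common cause is the time range: collect writes the events with their own timestamps, which can fall outside the time picker's window even though the index grows. A quick check, run over All Time:

  | tstats count where index=misp by _time span=1d

If events show up there at unexpected dates, re-run the search over All Time, or consider setting the timestamp explicitly before collecting, e.g. | eval _time=now() before | collect index=misp (a sketch; whether overwriting _time is appropriate depends on your data).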
Hello Community, now that I have managed to set up my UF forwarding logs to the HF, and have seen it all landing well on the IDX, my question is: where can I see the raw logs passing through the HF? The main idea is that no indexing will be done on the HF; the raw logs will be parsed there and then sent to the IDX for indexing. I do not have an index on the HF where I can apply rules to the logs. Should I use the HF UI, or something like props and transforms? If the latter, how can I find out the format of the raw logs on the HF, so I can apply the proper filters? Thanks All.
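Since the HF only parses and forwards, the simplest way to see the raw format is to let a sample reach the indexers and inspect it from a search head; the filtering rules themselves then go into props/transforms on the HF. A sketch (the host value is a placeholder):

  index=* host=my-source-host
  | head 20
  | table _time sourcetype _raw

Once you know the format, a props.conf/transforms.conf pair on the HF can match it with a REGEX and, for example, route unwanted events to the nullQueue.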
Hello, I'm trying to use the btool command to investigate the configurations under a new app I created. Please help.
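Basic usage, as a sketch (your_app and props are placeholders; substitute your app name and whichever .conf you care about):

  # show the merged view of props.conf across all apps
  $SPLUNK_HOME/bin/splunk btool props list --debug

  # limit the results to the context of one app
  $SPLUNK_HOME/bin/splunk btool props list --debug --app=your_app

--debug prefixes every line with the file it came from, which is usually the quickest way to see whether your app's settings are winning the precedence order.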
Hi, my organization uses Splunk Enterprise and I have just started learning. I need to add around 4000+ servers to Splunk Enterprise so that my team can view crucial metrics and data, along with reports such as reboots, CPU/memory usage, and drive alerts, all in a single frame. Is this technically possible, and if yes, how? The servers are all in different regions and in different environments, such as Production, Corporate, Stage, and Development. Anyone can reach out to me at smit.agasti10@gmail.com. It would be great if someone could help; please be mindful that I am a total rookie.
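It is technically possible; the usual pattern at that scale is a Universal Forwarder on every server, all managed centrally from a deployment server, so you configure inputs once per server class instead of 4000 times. A minimal sketch of the client side (the deployment server host is a placeholder):

  # deploymentclient.conf on each Universal Forwarder
  [deployment-client]

  [target-broker:deploymentServer]
  targetUri = deploy.example.com:8089

Regions and environments can then be modelled as server classes on the deployment server, and the metrics you list are covered by the standard Splunk Add-ons for Windows and Unix/Linux.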