All Topics


Hi, I'm doing an on-premises Splunk Enterprise proof-of-concept deployment. It's mostly successful, but I'm encountering one issue with the Windows Infrastructure add-on and am not sure what I'm missing. I'm hoping y'all can help point me in the right direction. Thanks in advance.

My current setup. Instances:
- SearchHead
- HeavyForwarder
- Indexer
- Manager (Licensing/Apps/ForwardManager)

I've made the following changes:
- Installed Splunk Add-on for Windows v8.0.0 (configured)
- Installed Splunk Supporting Add-on for Active Directory v3.0.1 (configured)
- Installed Splunk App for Windows Infrastructure v2.0.1 (incomplete)
- Configured Active Directory auditing

I've enabled most stanzas in inputs.conf but left DNS, Perfmon, PrintMon, and WindowsUpdates disabled, as they're outside the scope of what we want. I've created MSAD, perfmon, windows, and windowseventlog indexes, and I can see the events populating in those indexes and not in main. The SearchHead is configured and able to search logs on the indexer. On the SearchHead, when I search for "index=msad sourcetype=activedirectory", I get a thousand-plus results for index=msad, source=ActiveDirectory, sourcetype=ActiveDirectory. So everything looks kosher. I can see AD user events like account locks etc. as well.

When I run the guided setup for Splunk App for Windows Infrastructure on the SearchHead, I get successful results for the Prerequisite checks and Data Checks (except for the expected warnings on PerfMon and PrintMon, since those inputs.conf stanzas are disabled on the forwarders). However, when I run Customize Features, several features are not found, only some of which I expect:
- Windows \ Performance Monitoring (expected - I've disabled the stanza)
- Windows \ Applications and Updates (unsure - I've disabled the WindowsUpdate stanzas)
- Windows \ Print Monitoring (expected - I've disabled the stanza)
- Active Directory \ Domain Controllers (unsure)
- Active Directory \ DNS (expected - I've disabled the stanza)
- Active Directory \ Users (unsure)
- Active Directory \ Computers (unsure)
- Active Directory \ Groups (unsure)

I can't locate a precise explanation in the documentation (though I'm sure the issue is something simple) of why the Domain Controllers, Users, Computers, and Groups features under Active Directory are not found in the Windows Infrastructure guided setup, and I'm unsure what I may have missed during initial configuration. Any advice, direction, or help would be most welcome. Regards, Jon
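For reference, the inputs.conf stanzas being described look roughly like this (a sketch only; stanza names follow Splunk Add-on for Windows conventions, but the exact monitor names and defaults vary by add-on version, so treat these as assumptions):

    # inputs.conf on the forwarders (illustrative)
    [WinEventLog://Security]
    disabled = 0
    index = windowseventlog

    # Active Directory monitoring input from the Windows add-on
    [admon://default]
    disabled = 0
    index = msad

    # left disabled in this deployment
    [perfmon://CPU]
    disabled = 1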
Hi, does anyone know why I can't access the server database over VPN?
Hi,

I have the base search below:

    index="appv" (sourcetype="AppV-User" *PUT /package*) OR (sourcetype=sql_appv_packageversion)
    | rex "\/packages\/\w+\/(?<Id>\w+)\s+-\s+\d+\s+(?<User>[^ ]+)"
    | stats dc(sourcetype) AS dc_sourcetype values(Name) AS Name values(User) AS User BY Id
    | where dc_sourcetype=2

The statistics of this search look like this:

    Id     dc_sourcetype    Name           User
    323    2                Putty v0.72    User A

I have extracted a field called 'Enabled' which has a value of either Enabled=0 or Enabled=1. How do I update my search query so the table is shown as below?

    Id     dc_sourcetype    Name           User      Enabled
    323    2                Putty v0.72    User A    0 or 1
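One possible modification (a sketch, assuming Enabled is already extracted at search time on the same events that carry Name) is to aggregate it in the same stats call:

    index="appv" (sourcetype="AppV-User" *PUT /package*) OR (sourcetype=sql_appv_packageversion)
    | rex "\/packages\/\w+\/(?<Id>\w+)\s+-\s+\d+\s+(?<User>[^ ]+)"
    | stats dc(sourcetype) AS dc_sourcetype values(Name) AS Name values(User) AS User values(Enabled) AS Enabled BY Id
    | where dc_sourcetype=2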
Hello everyone, does anyone know the best way to get our Acronis Backup solution to send its logs to Splunk for ingestion? Also, are there any apps/add-ons that would handle the field extractions?
Hello everyone,

I'm hoping to get some assistance. My company uses WatchGuard Firebox firewalls. I'm working to get the data correctly ingested into Splunk and all the fields extracted (I'd HIGHLY prefer a CIM-compliant format), but the only WatchGuard app and TA add-on I've found are outdated, poorly written (I've been told), and not CIM compliant. Is there an easy way to pull the information from the WatchGuard Log Catalog (https://www.watchguard.com/help/docs/fireware/12/en-US/log_catalog/Log-Catalog_v12_5.pdf) and put it into Splunk to properly ingest and label the data coming in from WatchGuard logs?

Thanks for any and all assistance with this!
We are building a new Splunk environment. As we were doing this, I noticed that the Windows TA no longer includes a default/indexes.conf file and the inputs don't specify an index, so all events would go to the main index <yuck>. This kicked off a discussion about the best way to handle indexes going forward. Should I create a local/indexes.conf file for each app that does not have one, or should I create our own app with an indexes.conf file and make sure it has the highest precedence, so that it overrides any indexes.conf that may come bundled with an app? I can see that having an indexes.conf file in each app makes it easy to see which app the data goes with, but having our own app for all the indexes makes it easier to adjust them without editing multiple files. FYI, we have clustered indexers, so I cannot just rely on the web UI for this.
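For illustration, the standalone-app approach might look like this (a sketch; the app name org_all_indexes and the index names are assumptions):

    # etc/apps/org_all_indexes/local/indexes.conf
    [windows]
    homePath   = $SPLUNK_DB/windows/db
    coldPath   = $SPLUNK_DB/windows/colddb
    thawedPath = $SPLUNK_DB/windows/thaweddb

    [windowseventlog]
    homePath   = $SPLUNK_DB/windowseventlog/db
    coldPath   = $SPLUNK_DB/windowseventlog/colddb
    thawedPath = $SPLUNK_DB/windowseventlog/thaweddb

With clustered indexers, an app like this would be distributed from the cluster manager as part of the configuration bundle rather than edited on each peer.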
In our Splunk environment, some of the assets have variable host names. Is there a way to map an additional 'host' field from the JSON data when normalising, so that they can match a fixed asset lookup?
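If overriding the indexed host field at parse time is acceptable, one sketch would be a host-override transform (the sourcetype name and the "hostName" JSON key are assumptions about the data):

    # transforms.conf
    [set_host_from_json]
    REGEX = "hostName"\s*:\s*"([^"]+)"
    DEST_KEY = MetaData:Host
    FORMAT = host::$1

    # props.conf
    [my:json:sourcetype]
    TRANSFORMS-sethost = set_host_from_json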
I just installed a fresh Enterprise trial, version 8.1. Since we're using this in a Forescout training lab, we definitely need the free version. I get Splunk running, and I can log in. However, I cannot install any of the Forescout apps for Splunk (we're teaching how to integrate our two products), and I can't convert to the Free License group. I've done this many times before and have not had a problem.

License: I click Settings > Licensing > Change License group, select Free License, and click Save. After 1 or 2 seconds the display still says I'm using the Enterprise Trial License and simply refuses to change.

App install: I go to Apps > Manage Apps > Install App from file, browse to the file (I've tried Forescout App for Splunk v2.9.1 and v2.9.2; both do this), click Open, select 'upgrade app' (or not; I've tried it both ways and the same thing happens), and click Upload, and it dumps me right back to the Upload an app page. Clearly nothing is uploaded or installed.

Help please! Sue
Hello Splunkers, I'm facing a problem with correctly parsing JSON data. Splunk correctly recognizes the data as JSON, but with default settings it cannot parse it correctly. It creates fields like 3b629fbf-be6c-4806-8ceb-1e2b196b6277.currentUtilisation or device31.1.127.out::device54.1.87.in.currentUtilisation. As the top-level key is irregular, I don't know how to set LINE_BREAKER, which is the most likely cause of the problem. Can I count on your help? A chunk of the input file is below.

    {
      "device31.1.127.out::device54.1.87.in": {
        "currentUtilisation": 0.0,
        "enabled": true,
        "from": "device31.1.127.out",
        "hasBookings": false,
        "id": "device31.1.127.out::device54.1.87.in",
        "isActive": false,
        "name": "Adam",
        "props": { "bandwidth": 90000.0, "conflictPri": 1, "description": "" },
        "to": "device54.1.87.in",
        "usage": { "bookings": [], "id": "device31.1.127.out::device54.1.87.in" }
      },
      "device49.1.117.out::device34.1.69.in": {
        "currentUtilisation": 0.0,
        "enabled": true,
        "from": "device49.1.117.out",
        "hasBookings": false,
        "id": "device49.1.117.out::device34.1.69.in",
        "isActive": false,
        "name": "Barek",
        "props": { "bandwidth": 90000.0, "conflictPri": 1, "description": "" },
        "to": "device34.1.69.in",
        "usage": { "bookings": [], "id": "device49.1.117.out::device34.1.69.in" }
      },
      "3b629fbf-be6c-4806-8ceb-1e2b196b6277": {
        "currentUtilisation": 0.0,
        "enabled": true,
        "from": "device38.1.93.out",
        "hasBookings": false,
        "id": "3b629fbf-be6c-4806-8ceb-1e2b196b6277",
        "isActive": false,
        "name": "Cezary",
        "props": { "bandwidth": 90000.0, "conflictPri": 1, "description": "" },
        "to": "device441.1.89.in",
        "usage": { "bookings": [], "id": "3b629fbf-be6c-4806-8ceb-1e2b196b6277" }
      },
      "87725874-f760-4e37-9421-168506a05573": {
        "currentUtilisation": 0.0,
        "enabled": true,
        "from": "device21.1.75.out",
        "hasBookings": false,
        "id": "87725874-f760-4e37-9421-168506a05573",
        "isActive": false,
        "name": "Darek",
        "props": { "bandwidth": 90000.0, "conflictPri": 1, "description": "" },
        "to": "device61.1.97.in",
        "usage": { "bookings": [], "id": "87725874-f760-4e37-9421-168506a05573" }
      }
    }
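If the intent is to ingest each file as a single event and rely on search-time JSON extraction, a props.conf sketch (the sourcetype name is an assumption, and the never-matching LINE_BREAKER is a common community trick, not an official recipe) might be:

    [links:json]
    SHOULD_LINEMERGE = false
    # a regex that never matches, so the file is never split
    LINE_BREAKER = ((?!))
    TRUNCATE = 0
    KV_MODE = json

Even then, the irregular top-level keys will still prefix the extracted field names; flattening those would be a search-time job (e.g. with spath and renaming), so this sketch only addresses the event breaking.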
Just an announcement post to let people know we have published our TA for Trend Micro Deep Security and ApexOne to the community: https://splunkbase.splunk.com/app/5349/ This TA falls under our Unified line of TAs and will support as many Trend Micro products as we can. It is actually CIM compliant (vs. the usual tick box) and built on large datasets. Fully compatible with Splunk Enterprise and Splunk Cloud, built by an Ops team for Ops teams.
Dear all, my question might seem naive, and pardon me for that. I want to create an alert for data not being processed. My query triggers an alert if the number of events is greater than 5000, or if the number of events is greater than 1000 and the change in events between two hours is more than 1000. I used delta to get the difference. I am able to express the conditions and to alert as well. However, I wanted to check whether we could extract a specific value from a specific row and column of the results and build another table from it. For example, I want to create a new table with: the number of events at 15:00:00, the number of events at 14:00:00, and the change in the number of events. Kindly let me know how to achieve this.
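One way to pull the last two hourly values into a single row (a sketch; the base search and field names are assumptions, since the original query isn't shown in full):

    index=... | timechart span=1h count AS events
    | streamstats current=f window=1 last(events) AS prev_events
    | eval change = events - prev_events
    | tail 1
    | table _time prev_events events change

Here streamstats carries the previous hour's count onto each row, and tail 1 keeps only the most recent hour.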
Hi all, I have upgraded our Splunk indexer cluster from 7.3.0 to 8.1.0, and since then I see the red message below on the search head:

    The percentage of non high priority searches skipped (50%) over the last 24 hours is very high and exceeded the red thresholds (20%) on this Splunk instance. Total Searches that were part of this percentage=20. Total skipped Searches=10

Do you have any ideas how I could recover from this, and what is causing it? I took all the steps described here: https://docs.splunk.com/Documentation/Splunk/8.1.0/Installation/AboutupgradingREADTHISFIRST and I have followed this thread as well, but no luck: https://community.splunk.com/t5/Installation/Rolling-upgrade-restart-scheduled-searches-skipped-error/m-p/378662#M5180

Regards, Evang
Hello, I am trying to find the best way to change my search based on a token value that I pass through an input. Right now, I have a search that is filtered by a production area. I would like that search to use the sub production area instead, if one is selected. Both of these values have a token associated with them: $production_area$ and $sub_production_area$. I couldn't get a conditional in the search to work. I would only like to search on the sub production area if a value other than the default is selected. The current search limits results with production_area=$production_area$. I can provide more information if needed; I had trouble wording the question to fully explain what I am looking for.
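One common pattern is to compute a single filter token in the input's change handler and reference that in the search. A sketch in Simple XML (the dropdown choices and the default value "All" are assumptions):

    <input type="dropdown" token="sub_production_area">
      <label>Sub Production Area</label>
      <choice value="All">All</choice>
      <default>All</default>
      <change>
        <condition value="All">
          <set token="area_filter">production_area=$production_area$</set>
        </condition>
        <condition>
          <set token="area_filter">sub_production_area=$sub_production_area$</set>
        </condition>
      </change>
    </input>

The panel search would then use $area_filter$ in place of production_area=$production_area$.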
Hi guys, I'm trying to integrate Splunk with SAML authentication, and as you may know, I have to download a file from Azure and another from Splunk and upload each to the other platform. The thing is that the file I download from Splunk has the "Location" field (which is the hostname IP) separated with "-" rather than ".", so Azure can't resolve the hostname. This is the "Location" field: [screenshot not included]. Also, when I change this value manually, it breaks the distributed search configured on my indexers. Does anyone know how to replace this value so that Azure can resolve the hostname and I can finish this SAML integration?
Hi! I'm trying to extract a field named hostname from Check Point logs, but I couldn't do it with the wizards. Sample:

    time=1606760596|hostname=CHKHOST|product=Mobile Access|action=Log In|ifdir=inbound|loguid={0x5fc53894,0x0,0x250a000a,0x2e9b}|origin=10.0.X.X|originsicname=CN\=FW01,O\=CHKHOST.localdomain|sequencenum=293|time=1606760596|version=5|auth_encryption_methods=AES-256 + SHA1 + Group 2|auth_method=RADIUS|client_build=986100611|client_name=Endpoint Security VPN|client_version=E81.10|cvpn_category=Session|device_identification={85FAD095-E5AB-43BA-AA8C-B205F783226E}|domain_name=localdomain|event_type=Login|failed_login_factor_num=0|host_ip=192.168.X.X|host_type=PC|hostname=NB-0237|lastupdatetime=1606760596|login_option=Standard|login_timestamp=1606760596|mac_address=40:5b:d8:64:5b:29|methods:=3DES + SHA1|office_mode_ip=10.193.0.89|os_bits=64bit|os_build=18363|os_edition=Enterprise|os_name=Windows|os_version=10|proto=6|proxy_src_ip=0.0.0.0|s_port=0|service=443|session_timeout=43200|session_uid={5FC53894-0000-0000-0A00-0A259B2E0000}|src=181.121.X.X|status=Success|suppressed_logs=0|tunnel_protocol=IPSec|user=agimenez|user_dn=agimenez|user_group=VPN_Group|

As you can see, there are two hostname fields: one with the Check Point hostname and the other with the device connecting to the VPN. I need the second value. In smart mode or verbose mode, Splunk only detects the first hostname field. How can I parse the second one? I've tried the field extraction wizard with a regular expression, selecting only the second hostname, and I get this error:

    The extraction failed. If you are extracting multiple fields, try removing one or more fields. Start with extractions that are embedded within longer text strings.

When I try using delimiters, selecting pipe, I get this error:

    has exceeded the configured depth_limit, consider raising the value in limits.conf.

Any help would be appreciated.
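A search-time sketch that grabs the second occurrence (assuming the Check Point hostname always comes first in the event):

    ... | rex max_match=2 "hostname=(?<hostnames>[^|]+)"
        | eval client_hostname=mvindex(hostnames, 1)

rex with max_match=2 collects both values into a multivalue field, and mvindex(..., 1) takes the second one.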
I want to find, in my index=ip, the ip_address field values that are common to more than one host. I want to get something like "this IP address is common to 3 hosts", and so have a list or a chart where I can see all the IP addresses shared between hosts. I don't know how to get this; thank you in advance.
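A sketch of one way to do this (index and field names taken from the post):

    index=ip
    | stats dc(host) AS host_count values(host) AS hosts BY ip_address
    | where host_count > 1
    | sort - host_count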
Hi Splunkers, I'm working on a use case where I have to report on plain-text passwords being used. Please share any knowledge about this use case. Logs: Windows & Linux. Regards, Revanth.
I have this query:

    index=osnixscript sourcetype=cpu host=*
    | multikv fields pctIdle
    | eval Percent_CPU_Load = 100 - pctIdle
    | timechart span=5m avg(Percent_CPU_Load) by host

I have modified it as below, adding conditions to change the color of the graph based on criteria:

    index=osnixscript sourcetype=cpu host=*
    | multikv fields pctIdle
    | eval Percent_CPU_Load = 100 - pctIdle
    | timechart span=5m avg(Percent_CPU_Load) by host
    | eval Threshold_Color=case(Percent_CPU_Load>0 AND Percent_CPU_Load<=2, "Normal", Percent_CPU_Load>2 AND Percent_CPU_Load<=8, "Warning", Percent_CPU_Load>8 AND Percent_CPU_Load<90, "Critical")

I have added this to the XML:

    <option name="charting.fieldColors">{"Normal":0xFF0000,"Warning":0xFFFF00, "Critical":0x73A550}</option>

But I can't see any change in the graph colors based on the conditions defined in the query. Can someone please look into the query and suggest the changes required to fix this issue? Thanks in advance.
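For what it's worth, charting.fieldColors keys on series names, and a field computed after timechart (like Threshold_Color here) does not recolor existing per-host series. A sketch of an alternative that charts by the threshold itself (same thresholds as above; note this replaces the per-host series with per-threshold series, which may or may not be acceptable):

    index=osnixscript sourcetype=cpu host=*
    | multikv fields pctIdle
    | eval Percent_CPU_Load = 100 - pctIdle
    | eval Threshold=case(Percent_CPU_Load<=2, "Normal", Percent_CPU_Load<=8, "Warning", Percent_CPU_Load>8, "Critical")
    | timechart span=5m avg(Percent_CPU_Load) BY Threshold

With series named Normal, Warning, and Critical, the existing fieldColors option would then apply.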
Hi, we use the splunk_ta_nix app to fetch CPU metrics. What is used to fetch target response time?
On a Linux host, I am testing our HEC indexer acknowledgement setup on our heavy forwarder, following the documentation example, but I keep running into "Invalid data format" errors. I am running the following command to ingest data:

    curl https://10.1.10.20:8088/services/collector \
      -H "X-Splunk-Request-Channel: FE0ECFAD-13D5-401B-847D-77833BD77132" \
      -H "Authorization: Splunk 9cedcd53-b32d-43ba-9cb6-25a211c720bc" \
      -d '{ "host": "labPC", "source": "testCurl", "event": { "message": "Did I Make It?", "severity": "INFO"} }' -k

The data is getting indexed, and I receive the following response:

    {"text":"Success","code":0,"ackId":1}

But when I run the following command to verify the indexing status:

    curl -k https://10.1.10.20:8088/services/collector/ack?channel=FE0ECFAD-13D5-401B-847D-77833BD77132 \
      -H "Authorization: Splunk 9cedcd53-b32d-43ba-9cb6-25a211c720bc" \
      -d "{"acks":"0"}"

or any variation of "acks", "ack", "ackId", "0", "[0]", or escaping, I keep getting the same result:

    {"text":"Invalid data format","code":6}

Any help or guidance would be most appreciated. Thank you.
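For comparison, a sketch of the ack request with the body single-quoted so the shell does not strip the inner double quotes, and acks passed as an array of integer ackIds (the body shape the ack endpoint documentation describes; the ackId 1 comes from the Success response above):

    curl -k "https://10.1.10.20:8088/services/collector/ack?channel=FE0ECFAD-13D5-401B-847D-77833BD77132" \
      -H "Authorization: Splunk 9cedcd53-b32d-43ba-9cb6-25a211c720bc" \
      -d '{"acks": [1]}'

In the original command, the outer double quotes around the -d payload cause the shell to drop the inner quotes, so Splunk receives {acks:0} instead of valid JSON.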