Hello, for testing purposes at home I deployed Splunk in Docker following https://splunk.github.io/docker-splunk/. Splunk Enterprise and the UF work flawlessly, but I would like to get logs from my Windows 10 machine into the Splunk Docker instance. The containers are on the same Docker network, but the host network is different, so the Windows host UF can't reach Splunk Enterprise. When I switch the containers' network to host mode, it doesn't work at all. I am definitely missing something here. Is it even possible to send data from the host to a Docker container in real time, as I would like to? My host is Windows 10 running Docker with 2 containers: a Splunk UF and Splunk Enterprise. Thank you.
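One approach worth sketching, assuming the default docker-splunk setup: publish Splunk's conventional receiving port (9997) from the Enterprise container to the Windows host, enable receiving on that port in Splunk Enterprise, and point the host-side UF at localhost. The port number and output group name here are assumptions:

```
# Start (or recreate) the Splunk Enterprise container with the port published:
#   docker run -p 9997:9997 ... splunk/splunk

# Then enable receiving in Splunk Enterprise (Settings > Forwarding and
# receiving > Receive data > port 9997), and on the Windows host's UF:

# outputs.conf
[tcpout:docker_splunk]
server = 127.0.0.1:9997
```

With the port published, the UF on the Windows host reaches the container through Docker's port mapping rather than the container network, so switching the containers to host networking is not needed.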
Hi everybody, I hope you can help me with my problem. I want to add fields to a lookup from a search that doesn't use an index. We have no results, so I use the fillnull option and appendpipe to create a result, but the new fields are not added to the lookup. The KV store fields are fixed and defined in transforms.conf and collections.conf. For example:

| table key, Category, activation, target, tester, url
| fillnull
| appendpipe
    [ stats count
    | eval Category = "HOST Blacklist"
    | eval activation = "09/15/21"
    | eval target = "Un test ajout"
    | eval url = "http://www.test.html"
    | eval tester = "*test.html*"
    | eval key = Category.tester.target
    | where count == 0 ]
| fields - count
| table key, Category, activation, target, tester, url
| outputlookup t_lookup append=true override_if_empty=false key_field=key

I see my event in the search interface, but not in my lookup. Do you have an idea for adding fields like this? Thanks for your help.
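A simpler sketch to test whether the KV store write itself works, keeping the same collection and field names as above: makeresults produces one row directly, avoiding the appendpipe/where plumbing entirely.

```
| makeresults
| eval Category="HOST Blacklist", activation="09/15/21", target="Un test ajout",
       url="http://www.test.html", tester="*test.html*", key=Category.tester.target
| table key, Category, activation, target, tester, url
| outputlookup t_lookup append=true override_if_empty=false key_field=key
```

If this row appears in the lookup, the collection definition is fine and the problem lies in the original search's result set; if it does not, check that every field is declared in collections.conf.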
Hi everyone, I want to monitor files on a Linux server. Every hour (at minute 59), a file DATE.log is compressed into a DATE.gz. Through inputs.conf I am monitoring all the files (DATE*). I noticed that I have some logs missing for 20 minutes (from roughly minute 37 to minute 59) every hour between 8am and 8pm. I checked splunkd.log and saw this error: WARN TailReader - Insufficient permissions to read file='.../DATE.gz' (hint: Permission denied). I gave read rights on the .gz files, but maybe it's not enough, as the decompression happens on the forwarder. Should I give write rights to my splunk user on these files? Not sure if it's going to fix the missing logs problem, but I will start with that ^^ Have a good day,
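One thing to try, assuming the forwarder should only read the live log and not the hourly archives: blacklist the .gz files in the monitor stanza so Splunk never attempts to open them (the path below is an assumption):

```
# inputs.conf on the forwarder
[monitor:///var/log/myapp/DATE*]
blacklist = \.gz$
```

Since the .log is already tailed in real time before it is compressed, skipping the archives avoids both the permission error and potential duplicate indexing of the same events.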
Hi All, to forward data to third-party systems, I integrated the Splunk agent with the configs below. The third party is able to receive data by listening on a TCP port. Issue: unable to view default internal fields like source or host, which are required for data enrichment. I tried adding host and source in inputs.conf, but no luck. Is there any limitation on forwarding internal fields to third-party systems?

inputs.conf

[blacklist:$SPLUNK_HOME/var/log/splunk]

[monitor:///tmp/test1.log]
_TCP_ROUTING = App1Group

outputs.conf

[tcpout:App1Group]
server = <ip address>:<port>
sendCookedData = false
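This is expected behavior with sendCookedData = false: raw TCP output sends only the raw event text, so Splunk's internal metadata (host, source, sourcetype) is not in the payload. One workaround to sketch, assuming a heavy forwarder is available, is to embed the host into the raw event before it leaves (the stanza names are assumptions):

```
# props.conf (heavy forwarder)
[source::/tmp/test1.log]
TRANSFORMS-addhost = add_host_to_raw

# transforms.conf
[add_host_to_raw]
INGEST_EVAL = _raw=host." "._raw
```

Alternatively, a [syslog] output group adds a syslog header containing the host, which many third-party receivers can parse natively.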
- name: splunk jobid receive api call
  uri:
    url: https://{{ fis_apiBaseurl }}/services/search/jobs
    method: POST
    validate_certs: false
    timeout: 360
    force_basic_auth: yes
    status_code: 201, 200, 204
    headers:
      Accept: application/json
      Content-Type: "application/json"
      Authorization: " Bearer {{ fis_splunk_console_auth }}"
    return_content: true
    body: '{search = {{ data }}}'
    #body_format: json
  register: get_JobID
  delegate_to: 127.0.0.1

I am using the above code, and the URL is https://{{ fis_apiBaseurl }}/services/search/jobs. I am running this job in Ansible Tower, but the content comes back blank even though the return code is 200:

ok: [100.73.4.110] => {
    "get_JobID": {
        "cache_control": "no-store, no-cache, must-revalidate, max-age=0",
        "changed": false,
        "connection": "Close",
        "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!--This is to override browser formatting; see server.conf[httpServer] to disable. …

Can someone help me with the URL or the exact query?
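A likely cause is the request body: /services/search/jobs expects form-encoded parameters (a `search` string that begins with the search command), not a JSON document. A corrected sketch of the task, with variable names kept from the post and the stray leading space in the Authorization header removed:

```
- name: splunk jobid receive api call
  uri:
    url: "https://{{ fis_apiBaseurl }}/services/search/jobs"
    method: POST
    validate_certs: false
    timeout: 360
    status_code: [200, 201, 204]
    headers:
      Authorization: "Bearer {{ fis_splunk_console_auth }}"
    # The search jobs endpoint expects form-encoded parameters, not JSON:
    body_format: form-urlencoded
    body:
      search: "search {{ data }}"
      output_mode: json
    return_content: true
  register: get_JobID
  delegate_to: 127.0.0.1
```

With output_mode=json the response content should contain a sid field (the job ID) rather than the XML wrapper shown above.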
Hello

10-09-2021 00:30:50.477 +0000 ERROR PersistentScript - From {/opt/splunk/bin/python /opt/splunk/lib/python2.7/site-packages/splunk/persistconn/appserver.py}: /opt/splunk/etc/apps/splunk_ta_o365/bin/3rdparty/urllib3/connectionpool.py:846: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
10-09-2021 00:30:50.477 +0000 ERROR PersistentScript - From {/opt/splunk/bin/python /opt/splunk/lib/python2.7/site-packages/splunk/persistconn/appserver.py}: InsecureRequestWarning)
Hello, can you specify the difference between a Qualified Partner and an Associated Partner? We have been running Splunk since 2015 and have to renew our support. Best regards
Hello all, I have been trying to extract values from inconsistent data, as below. The highlighted values need to be extracted; however, the default extraction through the tool is not working as expected. Could you please help with this?

50271234,00004105,00000000,1600,"20210901225500","20210901225500",4,-1,-1,"SYSTEM","","psd217",46769359,"MS932","KAVS0263-I \x83W\x83\x87\x83u(AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/\x92l\x8ED\x94\xAD\x8Ds/04_\x92l\x8ED\x8Ew\x8E\xA6\x83f\x81[\x83^\x98A\x8Cg_\x8CߑO1TAX/V9B01_B:@5V689)\x82\xF0\x8AJ\x8En\x82\xB5\x82܂\xB7(host: UXC510, JOBID: 56620)","Information","jp1admin","/Example/JP1/AJS2","JOB","AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/\x92l\x8ED\x94\xAD\x8Ds/04_\x92l\x8ED\x8Ew\x8E\xA6\x83f\x81[\x83^\x98A\x8Cg_\x8CߑO1TAX/V9B01_B","JOBNET","AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/\x92l\x8ED\x94\xAD\x8Ds/04_\x92l\x8ED\x8Ew\x8E\xA6\x83f\x81[\x83^\x98A\x8Cg_\x8CߑO1TAX","AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/\x92l\x8ED\x94\xAD\x8Ds/04_\x92l\x8ED\x8Ew\x8E\xA6\x83f\x81[\x83^\x98A\x8Cg_\x8CߑO1TAX/V9B01_B","START","20210901225500","","",16,"A0","AJSROOT1:/\x90V\x8A_\x96{\x94ԏ\x88\x97\x9D/\x92l\x8ED\x94\xAD\x8Ds","A1","04_\x92l\x8ED\x8Ew\x8E\xA6\x83f\x81[\x83^\x98A\x8Cg_\x8CߑO1TAX","A2","V9B01_B","A3","@5V0689","ACTION_VERSION","0600","B0","n","B1","1","B2","jp1admin","B3","psd217","C0","UXC510","C1","","C6","r","H2","677758","H3","j","H4","q","PLATFORM","NT",

50270531,00003A71,00000000,3588,"20210901224800","20210901224800",4,-1,-1,"SYSTEM","","PSC611",565029,"MS932","""SE_LOG: User - GetStringS: Failed to LookupAccountName. Domain (\\SERVER-retailing.group), Name (SERVER\QVADMIN). 
System Error""","Error","","/Example/JP1/NTEVENT_LOGTRAP/QlikViewServer","LOGFILE","NTEVENTLOG","LOGFILE","NTEVENTLOG","","","","","",9,"A0","1630504080","A1","PSC611.H2O-retailing.group","A2","Application","A3","Error","A4","None","A5","300","A6","N/A","PLATFORM","NT","PPNAME","/HITACHI/JP1/NTEVENT_LOGTRAP",

The highlighted fields are the ones I am trying to extract here. Thank you
Hello Splunkers, is it possible to hide the Splunk icon and also the app dropdown at the top left for a particular user role? TIA,
My requirement is something like this. Lookup 1 looks like this:

Name | Avg_Count
A    | 3
B    | 7
D    | 8
F    | 5

Lookup 2 looks like this:

Name | Current_Count
A    | 2
C    | 4
D    | 6

In the search, I input both these lookups and want results like this:

Name | Avg_Count | Current_Count
A    | 3         | 2
B    | 7         | 0/null (0 preferred)
D    | 8         | 6
F    | 5         | 0
C    | 0         | 4

I have tried join/append/appendcols, but all of these have their limitations and won't give the intended results. I also looked at many solutions from the community but couldn't find one. Thanks in advance! Shaquib
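One pattern that behaves like a full outer join, assuming the two files are named lookup1.csv and lookup2.csv:

```
| inputlookup lookup1.csv
| append [| inputlookup lookup2.csv]
| stats values(Avg_Count) as Avg_Count, values(Current_Count) as Current_Count by Name
| fillnull value=0 Avg_Count Current_Count
```

stats by Name keeps every Name that appears in either lookup, and fillnull turns the missing side into the preferred 0.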
Please, can someone guide me?
I have a timechart from the command timechart span=1d count by skill1 which looks like this:

_time       VALUE1  VALUE2  VALUE3  VALUE4
2021-09-15  77      243     17      28
2021-09-16  80      104     65      22

And another timechart from the command timechart span=1d count by skill2 which looks like this:

_time       VALUE1  VALUE2  VALUE3  VALUE4
2021-09-15  70      200     10      12
2021-09-16  56      87      54      11

I want to create a new timechart with skill1's values minus skill2's values, as follows:

_time       VALUE1  VALUE2  VALUE3  VALUE4
2021-09-15  7       43      7       16
2021-09-16  24      17      11      11

I tried using the command timechart span=1d count by skill1-skill2, but it won't work. Any suggestions on how to create the new timechart?
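One way to sketch this in a single search, assuming skill1 and skill2 are never both set on the same event (the index name is a placeholder):

```
index=myindex (skill1=* OR skill2=*)
| eval skill = coalesce(skill1, skill2),
       sign  = if(isnotnull(skill1), 1, -1)
| timechart span=1d sum(sign) by skill
```

Each skill1 event contributes +1 and each skill2 event contributes -1, so sum(sign) per value per day is exactly the difference between the two original timecharts.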
Hello, good day! How do I add a progress bar inside a cell in a dashboard? I need the dashboard panel formatted in the way below: I want to show a progress bar inside a table-format panel. Could you please help me out with this?
Hi to whomever finds this,

The Incident Management Review shows repeated events.

What I did: I purposely logged in with the wrong information on a device, but I only did it once. However, the result shown below is generated twice in the Incident Management Review.

Also, the things I have specified in the settings for this correlation search, such as Severity, Default Status, and Recommended Actions, were not shown whenever the event is generated.

Result

Settings
Hi. I know a lookup file can contain wildcards and use them with the WILDCARD(<field>) setting, but is it possible to do the opposite, where the wildcard is in the dataset rather than the lookup?

For more detail: I have a lookup file that contains names and some location information, formatted like this:

lastname, firstname, m [us-us]

I have some source data that contains names, without the middle initial or the location data. I can manipulate it to get it into lastname, firstname format. Is it possible to do wildcard matching against a lookup when the wildcard is in the dataset, as opposed to the lookup file? So, if my data has a field called name as Doe, John* and the lookup file has fullname: Doe, John M [us-us], it would be a match?

| lookup myfile fullname as name

I appreciate any help. I tried looking around for this, but only found references to the wildcard in the lookup, not in the dataset.
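The lookup command cannot wildcard on the event side, but one workaround to sketch is normalizing both sides to a common key and joining on that. The regex that strips the middle initial and location is an assumption about the lookup's exact format:

```
... | eval name_key=lower(name)
| join type=left name_key
    [ | inputlookup myfile
      | eval name_key=lower(replace(fullname, "\s+\w\s+\[[^\]]+\]$", ""))
      | fields name_key fullname ]
```

If the lookup is large, the same normalization can instead be baked into a new column of the lookup file, so a plain lookup match works without join.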
Hi, I have got this log, which shows how much time it takes to load the investor page, in milliseconds (ms):

2021-09-15 13:40:12,005 {c0cf807e-ee8b-4bd7-bf10-b586302ce001} XYZ/Online/0659251190 END [/investor/load.htm] (5498) - 3312ms

I want to create a timeline chart showing how long the string "END [/investor/load.htm]" takes to load at different periods. I have got a time picker, so I can set the range, but how do I show the timeline for this string? SPL like:

(index=prd_applog OR index=prd_middleware) appid::a0061f sourcetype="btsfl:bti:audit"
| search "END [/investor/load.htm]"
| timechart span=1m <then something to be added here, like a regex to give that timeline>

Thanks
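A sketch of the missing piece, using rex to pull the trailing duration into a numeric field (the field name load_ms is an assumption):

```
(index=prd_applog OR index=prd_middleware) appid::a0061f sourcetype="btsfl:bti:audit" "END [/investor/load.htm]"
| rex "END \[/investor/load\.htm\]\s+\(\d+\)\s+-\s+(?<load_ms>\d+)ms"
| timechart span=1m avg(load_ms) as avg_load_ms
```

max(load_ms) or perc95(load_ms) can be swapped in if peaks matter more than averages.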
Hello, I have 3 sets of data and I want to join them all, but they don't share a single common field; the trouble I'm having is linking table 2 to table 3.

Table 1: host, ip
Table 2: host, ip, user
Table 3: user, location
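Since table 2 shares host with table 1 and user with table 3, it can act as the hub. A sketch with join, assuming each table comes from its own search or lookup (the placeholders stand in for those searches):

```
<search returning table 2: host, ip, user>
| join type=left host [ <search returning table 1: host, ip> ]
| join type=left user [ <search returning table 3: user, location> ]
| table host ip user location
```

join has subsearch limits on large result sets; for big tables, the append plus stats values(...) by host pattern is more robust.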
Before creating my own set of knowledge objects to get information on user activity, especially around searches, I decided to see what else was out there.  I stumbled across the Search Activity app which seemingly has pretty much everything I am looking for.  However, it isn't working in my Splunk Enterprise 8.2 environment.  Most dashboards don't populate data and it will not use the SA-ldapsearch add-on that is installed, configured, and working properly.  My guess is that the app is no longer supported (no updates since 2019).  Is anyone successfully using the app in Splunk 8.x? Any other recommendations for a similar app that may exist?  The only other thing I found was the User Monitoring for Splunk app, which  has some of the things I am looking for.  The data reported doesn't seem to be complete, which may just require some tweaking.  Curious what others may be using, if anything, to gain insight into Splunk user activity, especially as it pertains to user search behavior.
Hello All,

I just got a job where I inherited Splunk, with no knowledge about Splunk; I could say I'm in the "Splunk for Dummies" category. What I know is that we have Splunk Enterprise, with the Universal Forwarder installed on the domain controller and other important servers as well. Could someone assist me in creating alerts for the following:

Excessive Login Failures
Account Added to Security Enabled Group
Event Logs Cleared
Detect Excessive Account Lockouts from Endpoint
Short Lived Windows Accounts
Windows User Account Created/Deleted
Unclean Malware Detected
Disk Utilization Over 95%

Thank you very much in advance.
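As a starting point for the first item, a sketch assuming Windows Security events land in index=wineventlog (EventCode 4625 is a failed logon; the threshold is arbitrary):

```
index=wineventlog sourcetype=WinEventLog:Security EventCode=4625
| stats count by user, src
| where count > 5
```

Saved as a scheduled alert (e.g. every 15 minutes over the last 15 minutes), this covers Excessive Login Failures; several of the other items follow the same shape with different EventCodes (4720/4726 for account created/deleted, 1102 for logs cleared, 4740 for lockouts).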
I am trying to get our add-on, which was developed for standalone Splunk, to work in a SHC environment. The add-on takes input from the user in a setup view and saves the configuration values via a custom endpoint using the Splunk JS SDK. When setup is run on a standalone instance, we get custom fields from the system we are connecting to and create the modular alert HTML using the custom REST endpoint (also stored in /data/ui/alert/sa_myapp.html).

Is there a way to replicate the modular alert HTML across the search head cluster members when running setup from the deployer? As far as I can tell, setup needs to be run on each search head member to generate the HTML for that node, and this conflicts with SHC best practices, where setup is run only on the deployer, which pushes the conf files to the SHC members. Setup may need to be rerun for the add-on if custom fields are added or deleted in the system we are connecting to, to change the HTML used for mapping the fields between Splunk and our system.

Is there a solution so that setup only has to be run on the deployer? How can I replicate the HTML across the cluster members? In my investigation, the file /data/ui/alert/sa_myapp.html is not replicated across the search heads. If setup is run on each search head cluster member, the HTML is generated, but it is my understanding that setup should not be run on the SHC members, only on the deployer. Can setup run on the deployer post to the custom endpoint on each SHC member?