Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

I've followed the documentation and also some examples on here, but for some reason I can't seem to get these fields to extract. Here is an example of the log:

xxx localhost 9997 8003 test test endRequest 2266 2022-11-17T08:08:06.617 2022-11-17T08:08:06.640 23 0 - OK - - DESC EXTENDED VIEW test_data_imp DESC - Denodo-Scheduler JDBC 127.0.0.1 - -

The props are as follows:

[denodo-vdp-queries]
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
REPORT-denodo-vdp-queries-fields = REPORT-denodo-vdp-queries-fields

The transforms are as follows:

[REPORT-denodo-vdp-queries-fields]
DELIMS = "\t"
FIELDS = "server_name","host","port","id","database","username","notification_type","sessionID","start_time","end_time","duration","waiting_time","num_rows","state","completed","cache","query","request_type","elements","user_agent","access_interface","client_ip","transaction_id","web_service_name"

I've pushed the app to the forwarders that send in the data, and the data is in the right sourcetype. I've also pushed the app across the SH cluster, but none of the fields are extracted. Am I missing a step?
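For reference, a minimal props/transforms pairing for a tab-delimited search-time extraction looks like the sketch below (the stanza and class names are illustrative and the FIELDS list is abbreviated). The value on the right of the REPORT- line must match the transforms stanza name, and because REPORT extractions are applied at search time, both files need to be in effect on the search heads, with the app's knowledge objects shared so the searching role can see them:

    # props.conf (search heads)
    [denodo-vdp-queries]
    REPORT-denodo-fields = denodo-vdp-fields

    # transforms.conf (search heads)
    [denodo-vdp-fields]
    DELIMS = "\t"
    FIELDS = "server_name","host","port","id","database","username"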
Hi folks, has anyone had any luck with the new built-in "Token Refresh Check" alert that comes with the CrowdStrike Falcon Event Streams TA (version 2.0.9+)? It is now part of the TA to restart inputs if they become blocked or unstable (fewer than 2 events in an hour). We can prove the alert is triggering, because we are receiving the emailed alerts, but it doesn't seem to be restarting the inputs when no events are seen in the timeframe, so we're still having to manually disable and re-enable the inputs. As far as we can tell, everything is configured correctly. Has anyone had any luck with this alert? Cheers
Hi, below is an extract of the log data that I'd like to present in my dashboard. The purpose is to display the total additions and total errors for each file - Activity.txt and Activity_XYZ.txt.

11/4/2022 7:30:00 AM Processing Task t1. Searching for D:\Box\FIL\Import\Activity\*.txt
11/4/2022 7:30:00 AM Processing D:\Box\FIL\Import\Activity\Activity.txt
11/4/2022 7:30:00 AM Deleted D:\Box\FIL\Import\Activity\Activity.txt
11/4/2022 7:30:00 AM Total Attempted Add's: 7
11/4/2022 7:30:00 AM Total Additions: 7
11/4/2022 7:30:00 AM Total Errors during Add: 0
11/4/2022 7:30:00 AM Last Transaction: 0
11/4/2022 7:30:00 AM Processing D:\Box\FIL\Import\Activity\Activity_XYZ.txt
11/4/2022 7:30:00 AM Deleted D:\Box\FIL\Import\Activity\Activity_XYZ.txt
11/4/2022 7:30:00 AM Total Attempted Add's: 17
11/4/2022 7:30:00 AM Total Additions: 17
11/4/2022 7:30:00 AM Total Errors during Add: 0
11/4/2022 7:30:00 AM Last Transaction: 0

I've created a search, but it only displays the first instance of Total Additions and Total Errors; could you help me display the second instance of the data as well?

To display Total Additions I use the search below:

host="VMXXX12" source="D:\\Box\\FIL\\Logs\\Pbsa\\Import Activity.log" "Total Additions: "
| rex "Total Additions: \s*(?<Total_Additions>.+)\s*"
| fields Total_Additions
| head 1
| eval range=if(Total_Additions=="0", "severe", "low")

To display Total Errors I use the search below:

host="VMXXX12" source="D:\\Box\\FIL\\Logs\\Pbsa\\Import Activity.log" "Total Errors during Add: "
| rex "Total Errors during Add: \s*(?<Total_Error>.+)\s*"
| fields Total_Error
| head 1
| eval range=if(Total_Error=="0", "low", "severe")

Thanks!
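One way to get the totals for both files, sketched under the assumption that each log line is its own event (the field names are mine): capture the file name from the "Processing" lines, carry it forward onto the totals lines with filldown, and aggregate by file instead of using head 1:

host="VMXXX12" source="D:\\Box\\FIL\\Logs\\Pbsa\\Import Activity.log"
| rex "Processing (?<file>\S+\.txt)"
| rex "Total Additions: \s*(?<Total_Additions>\d+)"
| rex "Total Errors during Add: \s*(?<Total_Error>\d+)"
| sort 0 _time
| filldown file
| stats latest(Total_Additions) as Total_Additions, latest(Total_Error) as Total_Error by file
| eval range=if(Total_Error=="0", "low", "severe")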
We have the add-on installed. Is there any way to exclude specific types of events from indexing?
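A common pattern for this, sketched below with placeholder names, is to route the unwanted events to nullQueue at parsing time (on the indexers or on a heavy forwarder); the sourcetype and regex here are assumptions to adapt to the add-on's data:

    # props.conf
    [your:sourcetype]
    TRANSFORMS-drop_noise = drop_noise

    # transforms.conf
    [drop_noise]
    REGEX = pattern_matching_unwanted_events
    DEST_KEY = queue
    FORMAT = nullQueue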
Hi team, I am new here and would like to find a way to tackle this problem. I have structured JSON events that I am able to push to the HTTP Event Collector and build dashboards from. However, if I save the same JSON event data to a log file and use the forwarder, Splunk is unable to extract the fields. My sample JSON event is below.

{"time":1668673601179, "host":"SAG-13X8573", "event": {"correlationid":"11223361", "name":"API Start", "apiName":"StatementsAPI", "apiOperation":"getStatements", "method":"GET", "requestHeaders": {"Accept":"application/json", "Content-Type":"application/json"}, "pathParams": {"customerID":"11223344"}, "esbReqHeaders": {"Accept":"application/json"}}}

If I post this to the HTTP Event Collector, I can see the fields extracted correctly. If I save the same JSON data to a log file and the forwarder sends that data to Splunk, it is not parsed properly: the event fields are not extracted, including the timestamp. Should I format the JSON data in another way before writing it to the log file, or do any other configurations need to be done to make it work? Please let me know. Thank you.
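A likely explanation, offered as an assumption: the outer {"time": ..., "host": ..., "event": ...} wrapper is envelope metadata that the HEC /services/collector endpoint strips, whereas a monitored file is ingested verbatim, wrapper and all, so the field paths and timestamp no longer line up. If the file must keep the wrapper, a props.conf along these lines (the sourcetype name is hypothetical) tells Splunk to parse the JSON and read the epoch-milliseconds time field:

    # props.conf (INDEXED_EXTRACTIONS is applied where the file is read, i.e. on the forwarder)
    [myapp:json]
    INDEXED_EXTRACTIONS = json
    TIME_PREFIX = "time":
    TIME_FORMAT = %s%3N
    KV_MODE = none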
I am trying to create a custom field, like host and source, by making the changes shown in the attached photos of entrypoint.sh and inputs.conf. When I deploy with these changes, it does not create a field named "task" for me. Please check whether I am on the right path, and if I did anything wrong, let me know.
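For comparison, one documented mechanism for attaching a custom indexed field to an input is _meta in inputs.conf plus a matching fields.conf stanza on the search side; a minimal sketch, with the monitor path and value as placeholders:

    # inputs.conf (forwarder)
    [monitor:///var/log/myapp.log]
    _meta = task::my_task_name

    # fields.conf (search head)
    [task]
    INDEXED = true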
Hi Splunkers, I have two lookups that share a common field, "values". For example:

lookup1    lookup2
values     values
a          a
b          e
c          f
d          g

I need to compare these two lookups and get the values that are not in lookup2 (b, c, d). I'm using the query below, but it's not working. Please help. TIA

|inputlookup lookup1.csv
|stats count by "values"
|search NOT [|inputlookup lookup2.csv |fields "values" | fields - count ]
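A sketch of the usual pattern, assuming both CSVs name the column values: have the subsearch return only that field (the trailing "fields - count" inside the subsearch references a field that does not exist there), and filter before any aggregation:

| inputlookup lookup1.csv
| fields values
| search NOT [ | inputlookup lookup2.csv | fields values ]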
Hi team, I have created a user and set up capabilities; however, I haven't checked any delete capability. When I check with the user's console, I am still able to see the delete option. Please refer to the screenshot below. I even tried unchecking the can_delete option for the alert while using admin access, but it is still not working. Please suggest.
Hello! I currently have this eval in a search of mine:

| eval exists=if(like(_raw, "%xa recovery%"), 0, 1)

Is there any way to set the variable exists to 0 until a specific event comes up? What I'm trying to accomplish is this: if an event contains "xa recovery", exists=0 until an event contains "System READY", and then exists=1. Thank you!
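One way to latch a state between two marker events, sketched under the assumption that events should be evaluated in chronological order (the two strings come from the question; the rest is mine): mark the boundary events, then carry the most recent marker forward:

| sort 0 _time
| eval marker=case(like(_raw, "%xa recovery%"), 0, like(_raw, "%System READY%"), 1, true(), null())
| filldown marker
| eval exists=coalesce(marker, 1)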
I created a custom regex to extract a numeric value called "window size", which varies from positive to negative, and I want to display hosts by IP. I'm trying to figure out the best command (chart, stats, etc.). I really want all the hosts on one line graph with their unique window sizes. I'm not sure if I have to use trellis to accomplish this, but I was hoping to make each line a host IP address and have the up/down spikes in window size shown along the x-axis. I already have my two fields; I just can't figure out how to display the data correctly in a visualization. NOTE: whenever I do "chart count", the count gets in my way because it takes up a value and I don't know how to format it. I need the hosts to dip up and down with their values. Thanks in advance!
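Assuming the two extracted fields are window_size and host_ip (the names are mine), a timechart sketch like this draws one line per host over time, with the y-axis carrying the window size itself rather than an event count:

index=your_index sourcetype=your_sourcetype
| timechart span=5m avg(window_size) by host_ip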
Sample event:

{
  durationMs: 83
  properties: {
    url: https://mywebsite/v1/organization/41547/buildings
  }
  correlationId: e581d476-fa5f-4023-a53e-53d6e06734ae
}

I want to replace the IDs so the URL becomes https://mywebsite/v1/organization/{id}/buildings. I tried:

{base search string} | eval endpoint=replace(properties.url, "\d+", "{id}") | stats by endpoint

This returns no results, but if I try the correlationId field at the root level:

{base search string} | eval endpoint=replace(correlationId, "\d+", "{id}") | stats by endpoint

this returns what I expected:

endpoint | (other fields)
adb{id}f{id}-{id}fd{id}-{id}-a{id}b-{id}c{id}f{id}d | (other fields)
aea{id}e{id}c-fcdc-{id}-a{id}-{id}a{id}bfe{id}ee{id} | (other fields)

Why doesn't replace work on the nested field?
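A likely cause, noted as a sketch: in eval expressions, a field name containing dots must be wrapped in single quotes, otherwise properties.url is parsed as an expression rather than as a field reference (which is also why the root-level field works). For example:

{base search string}
| eval endpoint=replace('properties.url', "\d+", "{id}")
| stats count by endpoint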
I'm trying to set up Splunk on our network. We must use a proxy to access the internet. I've set the following (I've tried with and without sslVersions):

[sslConfig]
sslRootCAPath = /etc/pki/tls/cert.pem
sslVersions = tls1.2

[applicationsManagement]
sslVersions = tls1.2

[proxyConfig]
http_proxy = http://PROXY:8080
https_proxy = http://PROXY:3128
no_proxy = 127.0.0.0/8,::1,localhost,10.0.0.0/8,192.168.0.0/16,.nwra.com

splunkd reports:

11-16-2022 11:36:34.092 -0800 ERROR HttpClientRequest [50124 TcpChannelThread] - HTTP client error =error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol while accessing server=http://PROXY:3128 for request=https://cdn.splunkbase.splunk.com/media/private/signed_4240_20873_1668244830.tgz?response-content-disposition=attachment%3Bfilename%3D%22infosec-app-for-splunk_170.tgz%22&Expires=1668628893&Signature=Ks6QSvwm3FOjimXq42aW-xSdBeysPA1gYrQlQu0Urpf-R7XfnVyQnF8ChIlT4blEJ38jq-1Iy9vYopkI5MvZoccqJLsbv~fe8peAxgIDHABo0kGLacXoXgiYEE5MGxMmBlBcvA54dwr4xqdmo69zxl6FhfGxHBfi6KUAZ6zgrv0RlZNz7uQR95cmTpjPbtwlDDbw8IeUE4~NEDnNhRwAqD3mKiSHhfGYEgDF5kQMEHgkm2csRMyJ7i4qRMscF~dUeqjvrN0P1W~NfL8vykYTHWMXqoeY1OVFliRXzfhqjwcCw8GtQgCcTWT7WOrHLfhZNJR-nJ9kf786SLqgNVQUXA__&Key-Pair-Id=K3GLBBC7R7U34X.

I can download that URL fine from the machine directly:

https_proxy=http://PROXY:3128 curl 'https://cdn.splunkbase.splunk.com/media/private/signed_4240_20873_1668244830.tgz?response-content-disposition=attachment%3Bfilename%3D%22infosec-app-for-splunk_170.tgz%22&Expires=1668627891&Signature=aA-kU~xxaEcPSU~A3fY4tPEY2mzdfDNN-T4I~RF3bEFfqJB8u2-K7ia8IEMP~uqxqWQhGCKr2oBRC3qQqdsa2-vwz8yzvNgIPcwI5VFEjjFBs1yZu-0k91sOjFgbiCx3z2FetbSm2K05FOCCN2GCxrJacpjSCz9kPJdFrnsZRDgrdX9vHsC62Fn60OWt0IgRS3qoXKdHHWXct5-RFUciKoOFWX8Hdp4ZGXe~xx3UGhqkonqV-ZE~Nt34beC~J5SGdvTS8mZcr7bZKL9M4fefGRtHiVzdK8ffuqCe5Fsthoyyl8OHr4MJyTptHLcwZKJhthqee80hyrlPYyGVgiEeyQ__&Key-Pair-Id=K3GLBBC7R7U34X' -o /tmp/out

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   114  100   114    0     0    139      0 --:--:-- --:--:-- --:--:--   139

Both the Splunk server and the proxy are running EL 8.7.
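Reading the error, a guess: "unknown protocol" from SSL23_GET_SERVER_HELLO usually means the peer answered a TLS ClientHello in cleartext, so splunkd may be attempting a TLS handshake with the proxy itself on 3128, whereas curl opens with a plaintext CONNECT. If the proxy also accepts plain-HTTP CONNECT on 8080, one thing worth ruling out is pointing both settings at that port:

[proxyConfig]
http_proxy = http://PROXY:8080
https_proxy = http://PROXY:8080
no_proxy = 127.0.0.0/8,::1,localhost,10.0.0.0/8,192.168.0.0/16,.nwra.com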
How can I run a report against all the servers with the machine agent installed but not linked to an application?
I'm working on troubleshooting why the Splunk Add-on is not ingesting data into our Splunk Cloud environment (version 9.0), and I noticed that we have received a warning stating that the add-on is not compatible with Python 3. We followed Splunk's documentation (link: Splunk Documentation) and we are still not having any success. As a side note, we were unable to locate the permission below, though I'm not 100% sure that is the entire cause:

ReportingWebService.Read.All - Read Message Trace data - Microsoft Reporting WebService

Would the incompatibility with Python 3 cause issues with ingesting data?
Hi all, I'm attempting to develop a regex that will pick up a value contained in [ ] brackets (see below):

Log value: year number time:time:time 00 AAA0 Blah Blah Blah Blah Blah: [X] to [Y] (4 possible variables: X, Y, A, B)

I need to alert every time the "* to [bracketed value]" changes, so I'm trying to write a regex to pick out these bracketed values. Any help is appreciated!
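A minimal sketch, assuming the pair always appears as "[X] to [Y]" (the field names are mine): extract both bracketed values, then compare each event's to_state with the previous event's to catch changes:

| sort 0 _time
| rex "\[(?<from_state>[^\]]+)\]\s+to\s+\[(?<to_state>[^\]]+)\]"
| streamstats current=f window=1 last(to_state) as prev_to_state
| where to_state!=prev_to_state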
Hello guys! This is my first post, so sorry if the title is not as specific as it should be. We have an order tracking report here. The first status is label_created at 10:02. Later, a new status, arrived_at_facility, is added, and even though that is the latest one, label_created is superimposed over it. And this continues on and on: the tracking statuses arrive as normal, but label_created keeps being moved to the latest position. So our tracking report always takes label_created as the latest status, instead of something else such as in_transit. Any ideas what could be wrong with our logs? Thanks in advance, guys. If there is any additional info you need, ask away.
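To narrow down whether the label_created events are being re-emitted with fresh timestamps, a diagnostic sketch like this (order_id and status are assumed field names) compares when each status was indexed against its event time:

| eval index_time=_indextime
| stats latest(_time) as event_time, latest(index_time) as index_time by order_id, status
| eval lag_seconds=index_time-event_time
| sort order_id, - event_time

If label_created keeps showing a newer event_time than arrived_at_facility, the producer is most likely resending it rather than Splunk misordering it.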
Hello Splunkers, we have run into several issues, primarily with getting data into Splunk over HTTP Event Collectors. It appears that we need to replace our self-signed certificate with one whose root CA has been applied to our Splunk instance. We are trying to determine what impact updating the cert across our entire environment could have. Adding a cert to Splunk Web does not push it down to the HTTP collectors; they were still using the self-signed certificate, so it appears adding a new certificate to the cluster is required. This will be my first time updating the certificate across the entire environment, so feel free to provide any advice or doc pages that could assist. Documentation we are currently using: https://docs.splunk.com/Documentation/Splunk/9.0.2/Security/ConfigureandinstallcertificatesforLogObserver
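For what it's worth, the HEC listener reads its server certificate from the [http] stanza in inputs.conf rather than from the Splunk Web settings, which would explain why the collectors kept the self-signed cert; a minimal sketch with placeholder paths:

    # inputs.conf on the instances running HEC
    [http]
    enableSSL = 1
    serverCert = /opt/splunk/etc/auth/mycerts/hec_server_cert.pem
    sslPassword = your_private_key_password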
Trying to get these UUIDs/GUIDs to extract from the message field. Hoping to create a rex to extract everything after 'fieldx: ' matching the 8-4-4-4-12 character pattern, with each value separated by a comma after that. I've tried the "extract new fields" wizard, but there are well over 120 of these things; Splunk doesn't like selecting all of that, and filtering keeps throwing errors. I'd rather not have to do this one by one. These are embedded in the message field, as stated earlier. I'd like to make a new field with the rex if possible and name it "fieldx". Any and all help is welcome.

"message: Filtered marking ids for DAC property 'fieldx': abc12345-b123-c456-d789-123abx789edc, de14fc5e-22av-87dd-65d9-7563a7pleqw3, " (<-- there are about 120 more of these in a row)

Thanks in advance.
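A sketch using max_match so a single rex returns every match as a multivalue field; because the sample IDs look masked (they contain non-hex characters), this matches word characters in the 8-4-4-4-12 shape rather than strict hex digits:

| rex field=message max_match=0 "(?<fieldx>\w{8}-\w{4}-\w{4}-\w{4}-\w{12})"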
Hi, is it possible to add a task to a phase of a workbook in a particular container via an API call? Thanks for the help.