All Topics

Hey, can someone please help me build a query for users accessing a webpage despite a warning sign from the proxy? @splunk 
Hi, why are the two queries below giving me different percentage values? I checked the total count and the count for Action=Sell, and they are the same. Am I missing something here?

index=abc source=def | top Action

This gives me 49.7% for Action=Buy.

================================================

index=abc source=def | stats count as Total | appendcols [ search index=abc source=def | search Action=Buy | stats count as Buy] | eval Percent_Buy=round((Buy/Total)*100,2)

This gives me 27.7% for Action=Buy.
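For discrepancies like the one above, one single-pass alternative (a sketch, assuming the same index, source, and Action values as the question) avoids the appendcols subsearch entirely, so both numbers are guaranteed to be computed over exactly the same set of events:

```spl
index=abc source=def
| stats count AS Total, count(eval(Action="Buy")) AS Buy
| eval Percent_Buy = round((Buy / Total) * 100, 2)
```

If the two original searches ran over different time ranges, or the appendcols subsearch hit a subsearch limit, the percentages would diverge even though the underlying data is the same; computing both counts in one stats pass removes those variables.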
Is the Splunk Software Required for Splunk Training & Certification Exams?
Does anyone know the likelihood of landing an entry-level cybersecurity job with only a Splunk cybersecurity certification and no other cybersecurity or computer certifications (and no background or education in IT)?
I have set up the Java environment correctly. Both $PATH and $JAVA_HOME are accessible. I am able to connect to the JMX server with the same URL, username, and password from jconsole, but I am unable to connect from the Splunk Add-on for JMX. Below is what I found in $SPLUNK_HOME/var/log/splunk/jmx.log:

2023-11-26 16:03:54,246 WARNING pid=13767 tid=MainThread file=jmx.py:<module>:86 | Failed to open the invoke the input. Reason: Traceback (most recent call last):
File "/opt/splunk/etc/apps/Splunk_TA_jmx/bin/jmx.py", line 82, in <module>
java_const.JAVA_MAIN_ARGS
File "/opt/splunk/lib/python3.7/subprocess.py", line 800, in __init__
restore_signals, start_new_session)
File "/opt/splunk/lib/python3.7/subprocess.py", line 1567, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'java': 'java'
Hello, I have this query:

index="bigfixreport" | timechart count(Category__Names_of_Patches) as totalNumberOfPatches by Computer_Name | eval exposure_level = case( totalNumberOfPatches >= 2 AND totalNumberOfPatches <= 5, "Low Exposure", totalNumberOfPatches >= 6 AND totalNumberOfPatches <= 9, "Medium Exposure", totalNumberOfPatches >= 10, "High Exposure", totalNumberOfPatches == 1, "Compliant", 1=1, "<not reported>" ) | eval category=exposure_level | xyseries category exposure_level totalNumberOfPatches

The purpose of this query is to count the number of patches for each computer name and visualize it in a pie chart, one slice per category, with each slice in a different color ("Low Exposure" blue, "Medium Exposure" yellow, "High Exposure" red, "Compliant" green, "<not reported>" gray). I have a few problems:

1. Since I count numbers, "<not reported>" is never counted and does not appear in the list.
2. I get a new file every day, and for a few days the number of patches for some computer may stay the same (for example, 3 patches for a specific computer for 5 days). If I just count the number of patches it will count 3+3+3+3+3, which is wrong, since they are the same 3 patches.
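For the double-counting problem described above, one approach (a sketch, assuming each daily file repeats the same patch names verbatim for each computer) is to deduplicate patch entries per computer before counting, then bucket computers by exposure level. Note the case branches are reordered so the more specific conditions are tested first:

```spl
index="bigfixreport"
| dedup Computer_Name Category__Names_of_Patches
| stats dc(Category__Names_of_Patches) AS totalNumberOfPatches BY Computer_Name
| eval exposure_level = case(
    totalNumberOfPatches == 1, "Compliant",
    totalNumberOfPatches <= 5, "Low Exposure",
    totalNumberOfPatches <= 9, "Medium Exposure",
    totalNumberOfPatches >= 10, "High Exposure",
    1=1, "<not reported>")
| stats count AS computers BY exposure_level
```

Counting distinct patch names (dc) rather than events means the same 3 patches seen on 5 days still count as 3; the final stats gives one row per category, which a pie chart can consume directly.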
We have migrated the old search head (SH) to a new SH on new hardware, which has been upgraded to version 9.0.1. I need to migrate the old Enterprise Security, with all use cases, to the new SH.
The vulnerabilities below were reported on our Linux servers. Please let me know whether there is any impact on the Splunk application if we remediate them based on the solutions provided.

Amazon Linux 2 : polkit (ALAS-2022-1745)
Amazon Linux 2 : libgcrypt (ALAS-2022-1769)
Amazon Linux 2 : gzip, xz (ALAS-2022-1782)
Amazon Linux 2 : vim (ALAS-2022-1829)
Amazon Linux 2 : zlib (ALAS-2022-1849)
Amazon Linux 2 : aide (ALAS-2022-1850)
Amazon Linux 2 : pcre2 (ALAS-2022-1871)
Amazon Linux 2 : e2fsprogs (ALAS-2022-1884)
Amazon Linux 2 : sqlite (ALAS-2023-1911)
Amazon Linux 2 : libtasn1 (ALAS-2023-1908)
Amazon Linux 2 : freetype (ALAS-2023-1909)
Amazon Linux 2 : libpng (ALAS-2023-1904)
Amazon Linux 2 : python-lxml (ALAS-2023-1956)
Amazon Linux 2 : nss-util (ALAS-2023-1954)
Amazon Linux 2 : nss-softokn (ALAS-2023-1955)
Amazon Linux 2 : python (ALAS-2023-1980)
Amazon Linux 2 : cpio (ALAS-2023-1972)
Amazon Linux 2 : curl (ALAS-2023-1986)
Amazon Linux 2 : nss (ALAS-2023-1992)
Amazon Linux 2 : babel (ALAS-2023-2010)
Amazon Linux 2 : systemd (ALAS-2023-2004)
Amazon Linux 2 : jasper (ALAS-2023-2018)
Amazon Linux 2 : gd (ALAS-2023-2044)
Amazon Linux 2 : perl (ALAS-2023-2034)
Amazon Linux 2 : libwebp (ALAS-2023-2048)
Amazon Linux 2 : mariadb (ALAS-2023-2057)
Amazon Linux 2 : sysstat (ALAS-2023-2068)
Amazon Linux 2 : rsync (ALAS-2023-2074)
Amazon Linux 2 : dnsmasq (ALAS-2023-2069)
Amazon Linux 2 : glusterfs (ALAS-2023-2071)
Amazon Linux 2 : pcre (ALAS-2023-2082)
Amazon Linux 2 : git (ALAS-2023-2072)
Amazon Linux 2 : libfastjson (ALAS-2023-2079)
Amazon Linux 2 : openldap (ALAS-2023-2095)
Amazon Linux 2 : perl-HTTP-Tiny (ALAS-2023-2093)
Amazon Linux 2 : glib2 (ALAS-2023-2107)
Amazon Linux 2 : perl-Pod-Perldoc (ALAS-2023-2094)
Amazon Linux 2 : ncurses (ALAS-2023-2096)
Amazon Linux 2 : squashfs-tools (ALAS-2023-2152)
Amazon Linux 2 : fribidi (ALAS-2023-2116)
Amazon Linux 2 : tcpdump (ALAS-2023-2119)
Amazon Linux 2 : libX11 (ALAS-2023-2129)
Amazon Linux 2 : c-ares (ALAS-2023-2127)
Amazon Linux 2 : zstd (ALAS-2023-2140)
Amazon Linux 2 : SDL2 (ALAS-2023-2162)
Amazon Linux 2 : bluez (ALAS-2023-2167)
Amazon Linux 2 : avahi (ALAS-2023-2175)
Amazon Linux 2 : nghttp2 (ALAS-2023-2180)
Amazon Linux 2 : ca-certificates (ALAS-2023-2224)
Amazon Linux 2 : amazon-ssm-agent (ALAS-2023-2238)
Amazon Linux 2 : shadow-utils (ALAS-2023-2247)
Amazon Linux 2 : libssh2 (ALAS-2023-2257)
Amazon Linux 2 : libjpeg-turbo (ALAS-2023-2254)
Amazon Linux 2 : expat (ALAS-2023-2280)
Amazon Linux 2 : kernel (ALAS-2023-2264)
Amazon Linux 2 : flac (ALAS-2023-2283)
Amazon Linux 2 : python-pillow (ALAS-2023-2286)
Amazon Linux 2 : bind (ALAS-2023-2273)
Oracle Java JRE Unsupported Version Detection (Unix)
Could you please tell me how to change the sharing mode of alerts from "private" to "app"?
Hello, I have installed the add-on "Alien Vault Check OTX". Given this command, where I can query an IP, hash, or domain for indicators of compromise, could someone give me an idea of whether it is possible to associate it with, for example, the src_ip or dest_ip field of my firewall logs? https://apps.splunk.com/app/5422/#/details
Hi all, can someone help me understand which .conf file is responsible for controlling connectivity to/from the Internet? I want to make sure that our so-called pure on-prem Splunk Enterprise solution is unreachable from the Internet and, most importantly, is not sending data outside, e.g. 1.5 DTI. Thanks in advance.
Hello community, can anyone please help me understand whether the newest vulnerability can exploit a pure on-prem Splunk Enterprise clustered solution? Can arbitrary code be pushed remotely by any means? The Splunk documentation and advisory are not very clear: they just say SE, without mentioning anything about 1.5 DTI or a non-publicly-connected SE instance. Thank you.
I have the following log structure:

2023-11-25T21:18:54.244444  [  info      ]  I am a log message  request = GET /api/myendpoint    request_id = ff223452

I can capture the date and time (without the 244444 part) using:

rex field=myfield "(?<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})\.\d+"

and timestamp is properly captured. But if I try to extend this and capture the log level as well, for example with:

rex field=myfield "(?<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})\.\d+\s+\[\s*(?<loglevel>\w+)\s*\]\s+"

it doesn't work; neither the timestamp nor the loglevel is captured. What am I doing wrong?
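The extended pattern above is syntactically valid and does match the sample line as printed, so one thing worth checking (a sketch, assuming the full line is available in _raw) is whether the same rex succeeds against the raw event rather than against myfield:

```spl
index=* "I am a log message"
| rex field=_raw "(?<timestamp>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})\.\d+\s+\[\s*(?<loglevel>\w+)\s*\]"
| table timestamp, loglevel
```

If this extracts both fields, the likely culprit is that myfield contains only a fragment of the line (or has different spacing than the displayed event), so the longer pattern can never match, while the shorter timestamp-only pattern still can.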
I need help with an employee travel analysis report. I have an index containing information about employee office check-ins in various countries. Events have the fields Employee, Time, and Country. For example:

John Doe, 2023-11-15T20:05:31.000+00:00, France
...
John Doe, 2023-11-18T10:00:31.000+00:00, France
...
John Doe, 2023-11-20T10:05:31.000+00:00, United States
...
John Doe, 2023-11-25T20:05:31.000+00:00, United States

In the end, I would like a result showing the duration in days between the first check-in and the last check-in, per employee per country:

John Doe, 3d, France
John Doe, 5d, United States
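The report above can be sketched as a stats-based duration calculation (the index name my_checkins is an assumption, as is the Time field being parsed into _time at ingest):

```spl
index=my_checkins
| stats earliest(_time) AS first_checkin, latest(_time) AS last_checkin BY Employee, Country
| eval duration = round((last_checkin - first_checkin) / 86400) . "d"
| table Employee, duration, Country
```

Grouping by both Employee and Country gives one row per employee per country; dividing the epoch-seconds difference by 86400 converts it to days, matching the "3d" / "5d" style output in the example.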
Hi Splunkers, I see 5-6 apps to update in my Splunk Cloud, and it asks for a restart whenever I hover over Update. I don't want to restart each time; what is the way to update all the apps at once and restart only once? Thank you! @gcusello 
So the site we use to get the 6-month developer license has been down for a few days now. After accepting the T&Cs (https://dev.splunk.com/enterprise/dev_license/devlicenseagreement/) and waiting for 30+ seconds, I end up on https://dev.splunk.com/enterprise/dev_license/error, which shows this error: "$Request failed with the following error code: 500". I've already sent this to devinfo@splunk.com but have gotten no response. Is anyone else hitting this issue?
With a query like the following (I've simplified it a little here and renamed some fields):

index="my-test-index" project="my-project" | eval _time = strptime(my_timestamp, "%Y-%m-%dT%H:%M:%S.%N+00:00") | stats latest(my_timestamp) latest(_time) latest(my_count) as my_count by project

I see behaviour that surprised me:

1. If I repeatedly issue the query, the value of my_count varies.
2. It appears the rows from which my_count is taken are always those without a _time value resulting from the eval in my query (because either `my_timestamp` did not match the strptime format, or the field was not present when the record was ingested into Splunk; my data has both cases).
3. In the output of the search, the value of my_timestamp returned does not always come from the same ingested record as my_count.
4. In fact, the value of my_timestamp in the search output is always taken from the same single record: it doesn't change when I repeatedly issue the query.

I guess 1 and 2 happen because "null" (or empty, or some similar concept) _time values aren't really expected and happen to sort latest. I guess 3 happens because the `latest` function operates field by field and does not select a whole row, combined again with the fact that some _time values are null. I don't understand 4, but perhaps it is a coincidence and not reliably true outside my data set; I'm not sure.

What I really want is to find the ingested record with the latest value of `my_timestamp` for a given `project`, so I can present fields like `my_count` by `project` in a "most recent counts" table. I don't want to operate on individual fields' "latest" values as in the query above, but rather on the latest entire records. How can I best achieve that in Splunk?
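One way to keep whole records together rather than taking per-field latest values (a sketch, using the same field names as the question) is to drop events whose timestamp failed to parse, sort by the parsed time, and keep one event per project with dedup:

```spl
index="my-test-index" project="my-project"
| eval _time = strptime(my_timestamp, "%Y-%m-%dT%H:%M:%S.%N+00:00")
| where isnotnull(_time)
| sort 0 - _time
| dedup project
| table project, my_timestamp, my_count
```

Because dedup keeps the first event it sees for each project, sorting descending first means the surviving event is the complete record with the latest parsed timestamp, so my_timestamp and my_count are guaranteed to come from the same ingested record. The `where` clause also removes the null-_time events that were skewing the latest() results.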
Hi, I have messages like the ones below:

msg: abc.com - [2023-11-24T18:38:26.541235976Z] "GET /products/?brand=ggg&market=ca&cid=5664&locale=en_CA&pageSize=300&ignoreInventory=false&includeMarketingFlagsDetails=true&size=3%7C131%7C1%7C1914&trackingid=541820668241808 HTTP/1.1" 200 0 47936  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36" "10.119.25.242:59364" "10.119.80.158:61038" x_forwarded_for:"108.172.104.40, 184.30.149.136, 10.119.155.154, 10.119.145.54,108.172.104.40,10.119.112.127, 10.119.25.242" x_forwarded_proto:"https" vcap_request_id:"faa6d72c-4518-4847-47b2-0b340bb27173" response_time:0.455132 gorouter_time:0.000153 app_id:"1ae5e787-31d1-4b6a-aa7a-1ff7daed2542" app_index:"41" instance_id:"5698b714-359f-4906-742e-2bd7" x_cf_routererror:"-" x_b3_traceid:"042db9308779903a607119a204239679" x_b3_spanid:"b6e3d71259e4c787" x_b3_parentspanid:"607119a204239679" b3:"1188a5551d8c70081e69521568459a30-1e69521568459a30"

msg: abc.com - [2023-11-24T18:38:25.779609363Z] "GET /products/?brand=hhh&market=us&cid=1185233&locale=en_US&pageSize=300&ignoreInventory=false&includeMarketingFlagsDetails=true&department=136&trackingid=64354799847524800 HTTP/1.1" 200 0 349377 "Mozilla/5.0 (iPhone; CPU iPhone OS 16_6_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.6 Mobile/15E148 Safari/604.1" "10.119.25.155:53702" "10.119.80.152:61026" x_forwarded_for:"174.203.39.239, 23.64.120.177, 10.119.155.137, 10.119.145.11,174.203.39.239,10.119.112.37, 10.119.25.155" x_forwarded_proto:"https" vcap_request_id:"d1628805-0307-4bf7-7d8d-b1fa3a829986" response_time:1.211096 gorouter_time:0.000257 app_id:"1ae5e787-31d1-4b6a-aa7a-1ff7daed2542" app_index:"180" instance_id:"8faf9328-b05d-4618-7d12-96e6" x_cf_routererror:"-" x_b3_traceid:"06880ee3e5ad85b36dd3f4e64337a842" x_b3_spanid:"acb1620e517eebec" x_b3_parentspanid:"6dd3f4e64337a842" b3:"06880ee3e5ad85b36dd3f4e64337a842-6dd3f4e64337a842"

msg: abc.com - [2023-11-24T18:38:26.916331792Z] "GET /products/?cid=1127944&department=75&market=us&locale=en_US&pageNumber=1&pageSize=60&trackingid=6936C9BF-D9DD-4D77-A14F-099C0400345D&brand=lll HTTP/1.1" 200 0 48615 "-" "browse" "10.119.25.172:51116" "10.119.80.139:61034" x_forwarded_for:"10.119.80.195, 10.119.25.172" x_forwarded_proto:"https" vcap_request_id:"a3125da7-a602-4e17-6656-909f380c12ed" response_time:0.068075 gorouter_time:0.000737 app_id:"1ae5e787-31d1-4b6a-aa7a-1ff7daed2542" app_index:"156" instance_id:"4f44c63e-44c6-4605-7466-fe5d" x_cf_routererror:"-" x_b3_traceid:"731b434ec32bb0eb6236fd4a8b8e1195" x_b3_spanid:"6236fd4a8b8e1195" x_b3_parentspanid:"-" b3:"731b434ec32bb0eb6236fd4a8b8e1195-6236fd4a8b8e1195"

I am trying to extract the values of brand, market, and cid from the above URLs with the query below:

index = dd | rex field=_raw "brand=(?<brand>[^&]+)" | rex field=_raw "market=(?<market>[^&]+)" | rex field=_raw "cid=(?<cid>\d+)" | table brand, market, cid

But I get the whole URL after brand= extracted, not just the brand, market, and cid values. Please help.
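The runaway extraction described above shows up when brand is the last query parameter (as in the third message, brand=lll HTTP/1.1 ...): with no & after it, [^&]+ keeps matching to the end of the line. Excluding whitespace as well stops each capture at the end of the URL token (a sketch, keeping the original index and field names):

```spl
index=dd
| rex field=_raw "[?&]brand=(?<brand>[^&\s]+)"
| rex field=_raw "[?&]market=(?<market>[^&\s]+)"
| rex field=_raw "[?&]cid=(?<cid>\d+)"
| table brand, market, cid
```

The [?&] prefix is an extra safeguard so the patterns only match real query parameters and not, say, a cid= or market= substring inside a longer parameter name.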
Hi team, I have recently installed the AppDynamics platform admin on a Linux server and successfully installed the Controller through the GUI, but I am not able to install the Events Service. (Note: I have two Linux servers, one for the Platform Admin and Controller, and a second server for the Events Service.)

I successfully added the Events Service server host in the Hosts tab through OpenSSH between the two servers. While installing the Events Service I got a connection timeout error, unable to ping. So I tried changing the property values in the events-services-api-store.properties file to IP addresses instead of hostnames. Then I added the following environment variable for the new user:

export INSTALL_BOOTSTRAP_MASTER_ES8=true

After that I restarted the Events Service manually using the commands below from the events-service/processor directory:

bin/events-service.sh stop -f && rm -r events-service-api-store.id && rm -r elasticsearch.id
nohup bin/events-service.sh start -p conf/events-service-api-store.properties &

After following the above steps, I get an error in the Enterprise Console while starting the Events Service. Please help me resolve this issue.