All Topics

Hello everyone, I got the following table from a search:

ip         subnets
10.0.0.2   10.0.0.0/24
10.0.0.3   10.0.0.0/24 172.24.23.23/24

I want to check whether each ip belongs to one of its subnets, using this comparison:

| eval match=if(cidrmatch(subnets, ip), "match", "nomatch")

It works correctly when there is a single subnet, but not when there are several. How can I correct my search query?
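One way to handle a multivalue subnets field, assuming Splunk 8.0+ where the mvmap eval function is available, is to test each subnet value individually and then check whether any of them matched:

```
| eval matched=mvmap(subnets, if(cidrmatch(subnets, ip), "match", null()))
| eval match=if(isnotnull(matched), "match", "nomatch")
```

Inside mvmap, the reference to subnets is evaluated once per value, so each CIDR block is tested against ip separately.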
Hi Team, we have some questions on uploading data. 1) Can we upload sample JSON/CSV data that is CIM compatible, and is there a demo we can see? 2) How do we ingest sample network / network-traffic data on Splunk Enterprise? 3) Similarly, we are looking for more sample data related to email, mac-addr, etc. on Splunk Enterprise (trial account). Regards, Anand
Hello Everyone, our requirement is to fetch/download the service health score via the REST API. We are on Splunk Cloud at the moment. Thank you.
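If your stack allows access to the management port, a sketch of pulling splunkd health details over REST might look like the following; the stack URL and credentials are placeholders, and note that Splunk Cloud often restricts port 8089, so you may need a support case to enable API access:

```
curl -k -u admin:yourpassword \
  "https://yourstack.splunkcloud.com:8089/services/server/health/splunkd/details?output_mode=json"
```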
Hi everyone, I want to deploy standard inputs to about 50 Linux UFs via custom apps. Since the standard log paths differ between Debian- and RedHat-flavored systems, I want to know whether it is possible to differentiate these systems on the Forwarder Management side, e.g.:

- Server Class DEB-based -> App A
- Server Class RPM-based -> App B

I'm aware that I can tell Windows and Linux machine types apart, but is it possible to get more detail on the Linux side? I would rather not use Ansible to deploy all the inputs.conf files, because I think that will become a mess when config updates are pending. Thank you.
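machineTypesFilter in serverclass.conf only distinguishes broad platforms (e.g. linux-x86_64), not distributions. One common workaround, sketched below with assumed class and app names, is to set a distinguishing clientName in deploymentclient.conf when installing the UF and whitelist on it in the server class:

```
# deploymentclient.conf on Debian-based forwarders (set at install time)
[deployment-client]
clientName = deb-linux

# serverclass.conf on the deployment server
[serverClass:DEB-based]
whitelist.0 = deb-linux

[serverClass:DEB-based:app:App_A]
restartSplunkd = true
```

whitelist entries match against clientName, hostname, or IP, so a consistent naming scheme per distribution family lets Forwarder Management target each family separately.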
Hello, for some time now we have been experiencing high CPU and/or memory usage related to the splunk_ta_o365 input add-on. The impacted processes are the Python ones:

%CPU PID     USER   COMMAND
98.9 317938  splunk /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365_graph_api.py
97.7 317203  splunk /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365_graph_api.py
8.8  3058237 splunk splunkd -p 8089 restart

Updating didn't resolve the issue. Is anyone else experiencing this? How can we manage it? Thanks!
Hello, I have a deployment app that monitors a log file from an external server, and it had worked fine since last year. But suddenly, since 26/1/2023 until now, it hasn't indexed anything. Nothing changed on the server side or on my side, and the host still produces log files daily. I also asked to have the connection checked and restarted the deployment client, but there was no improvement. My inputs.conf is:

[monitor:///u01/pv/log-1/data/trafficmanager/enriched/access/*.log]
disabled = 0
index = my index
sourcetype = my sourcetype

An example log file name is: access_worker_6_2023_01_26.log. I would like to resolve this problem, even redoing every step if I have to, because this is urgent. I would also like to know how to troubleshoot step by step to find where the problem is, and how to prevent it in the future.
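As a troubleshooting step, the forwarder logs its file-monitoring activity to _internal, so a search along these lines (the forwarder hostname is a placeholder) can show whether the path is still being watched and whether errors are being reported for it:

```
index=_internal sourcetype=splunkd host=<your_forwarder>
    (component=TailingProcessor OR component=TailReader OR component=WatchedFile OR log_level=ERROR)
    "/u01/pv/log-1/data/trafficmanager/enriched/access"
```

On the forwarder itself, `splunk list inputstatus` also reports the state of each monitored file, which helps distinguish "file never seen" from "file seen but already marked as read".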
Hi Splunkers, I was wondering if there is a way to output the contents of a lookup file but also show the lookup file name in the results. For example:

| inputlookup append=t <filename1>.csv
| inputlookup append=t <filename2>.csv
| inputlookup append=t <filename3>.csv

Running the above shows the contents, but I would like to know which file each row of the contents relates to. Thanks.
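One approach is to tag each lookup's rows with an eval before appending the next one; the filenames below are kept as the placeholders from the question:

```
| inputlookup <filename1>.csv | eval lookup_file="<filename1>.csv"
| append [| inputlookup <filename2>.csv | eval lookup_file="<filename2>.csv"]
| append [| inputlookup <filename3>.csv | eval lookup_file="<filename3>.csv"]
```

Each row then carries a lookup_file column identifying its source file.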
Hi All, I am trying out Splunk Enterprise for MySQL database monitoring, using a trial version of Splunk. I have installed the Splunk DB Connect application and the MySQL driver mysql-connector-j-8.0.31.jar, and I can see that the driver is installed. I have also configured a MySQL DB connection. But when I run Health Check from the monitoring console for the Splunk DB Connect app, I get the errors "One or more defined connections require the corresponding JDBC driver." and "Driver version is invalid, connection: MySQLDB, connection_type: mysql." The MySQL DB version is 8.0.23 and it's an AWS Aurora RDS instance. I previously tried "Splunk DBX Add-on for MySQL JDBC" to install the driver and had the same issue. The driver bundled with that app was a lower version than what our application uses to connect to the database, so I installed the newer driver directly. Also, while trying to execute a query I get the error "Error in 'dbxquery' command: External search command exited unexpectedly with non-zero error code 1." Please help to resolve this. Thanks, Arun
The splunkd service tries to start after a Windows server reboot but then stops suddenly. MS has confirmed that there is no issue with the Service Control Manager. This Splunk service stoppage occurs only on servers after the patching reboot.
I'm trying to parse saved searches that contain a bunch of eval statements with this sort of logic:

| eval var=case( a,b, c,d, e,f)
| eval var2=case( match(x, "z|y|z"), 1, match(x, "a|b|c"), 2)
| eval...

I have the search string from the REST API response and am trying to extract all the LHS=RHS statements with:

| rex field=search max_match=0 "(?s)\|\s*eval (?<field>\w+)=(?<data>.*?)"

This captures all the fields in <field> nicely, i.e. var and var2 (in this example because of the non-greedy ?), but I am struggling to capture <data>: the data is multi-line, and if I don't use non-greedy (?) then I get only ONE field back, with data as the remainder of the search string, i.e. greedy (.*). I can't use [^|]* (effectively much the same) because the eval statements may contain pipes (|), so I want to extract up to the next \n|\s?eval. I've been banging around on regex101 but just can't figure out the syntax to make this work. Any ideas?
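One way to keep the non-greedy capture but stop it at the right place is a zero-width lookahead that matches either the next eval pipe or the end of the string:

```
| rex field=search max_match=0 "(?s)\|\s*eval\s+(?<field>\w+)=(?<data>.*?)(?=\n\s*\|\s*eval|$)"
```

The (?=...) ends each match just before the next `| eval` without consuming it, so max_match=0 can pick up the following statement. If other commands can follow the last eval, you may need to extend the lookahead alternation to cover them too.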
I have events (generated every 5 minutes) for a set of websites, their user base, and their country. The goal is to find the number of distinct users per hour/day/month for each website per country during the last 6 months. In the end it should look something like this:

Over the last 6 months:
Country1 - Website1 - 12 users/hour (or day, month)
Country1 - Website2 - 2 users/hour (or day, month)
Country3 - Website1 - 10 users/hour (or day, month)
Country2 - Website3 - 8 users/hour (or day, month)

And what would be the most appropriate chart to visualize the outcome? I have come up with this line, but I'm not sure it gives what I want (the hourly average):

index... | chart count(user) as no_users by location website span=1h
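A sketch of the hourly average of distinct users, assuming the field names user, location, and website: first count distinct users per hour, then average those hourly counts per country and website:

```
index=... earliest=-6mon@mon
| bin _time span=1h
| stats dc(user) as distinct_users by _time location website
| stats avg(distinct_users) as avg_users_per_hour by location website
```

Swap span=1h for 1d or 1mon for daily or monthly figures. For visualization, a column chart split by website (or a trellis layout per country) usually reads better than a line chart for this kind of categorical comparison.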
Hi experts, I'm trying to extract multivalue output from a multiline JSON field through props and transforms. How best can I achieve this for the sample data below (the my_mvdata field)? I can write a regex in props.conf with a \t delimiter, but I only get the first line. How do I use multivalue add (MV_ADD) and do it through transforms?

{
something: false
somethingelse: true
blah: blah:
my_mvdata:
server1 count1 country1 code1 message1
server2 count1 country1 code1 message2
server3 count1 country1 code1 message3
server4 count1 country1 code1 message4
blah: blah:
}
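A sketch of a search-time multivalue extraction, with all stanza and field names assumed: props.conf points a REPORT at a transform whose REPEAT_MATCH and MV_ADD settings make the regex run repeatedly over the field and append each match as another value:

```
# props.conf
[your_sourcetype]
REPORT-mvdata = extract_my_mvdata

# transforms.conf
[extract_my_mvdata]
SOURCE_KEY = my_mvdata
REGEX = (?m)^\s*(\S+)\t(\S+)\t(\S+)\t(\S+)\t(\S+)\s*$
FORMAT = server::$1 count::$2 country::$3 code::$4 message::$5
MV_ADD = true
REPEAT_MATCH = true
```

SOURCE_KEY makes the transform run against the already-extracted my_mvdata field rather than _raw; adjust the REGEX to whatever actually delimits your five columns.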
My search:

| makeresults earliest=-2h | timechart count as aantal span=1m

returns a list of zeros, but for the last/current minute it returns "1". I want only zeros back, so that I can combine this search with a timechart. After combining these searches (makeresults and timechart) there should be no more "no values found" message. What do I have to change so that my makeresults search returns only zeros?
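A sketch of generating an all-zero minute-by-minute series: create two events two hours apart so that timechart fills in every minute between them, then force all counts to zero:

```
| makeresults
| eval _time=relative_time(now(), "-2h@m")
| append [| makeresults]
| timechart span=1m count as aantal
| eval aantal=0
```

The final eval overwrites the two 1s at the endpoints, leaving a continuous series of zeros ready to append to your real timechart.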
The first parameter is expectedBw and the other one is observedBw; expectedBw remains constant. We have to show with a line graph how expectedBw is being achieved with respect to observedBw. There need to be two lines, one for expectedBw and one for observedBw. The x-axis should show time and the y-axis bandwidth, i.e. at different times, whether observedBw is running ahead of or behind expectedBw.
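A sketch, assuming the events carry an observedBw field and that the constant expected value is known (100 here is a stand-in):

```
index=...
| timechart span=5m avg(observedBw) as observedBw
| eval expectedBw=100
```

Because expectedBw is set on every row, a line chart renders it as a flat reference line alongside the fluctuating observedBw line.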
Hi, I am using the REST API to pull data from Splunk with output_mode=json. The data that is returned is a mix of strings and JSON objects, and I am trying to work out a way for the API to return the entire data set as JSON. For example:

Curl command:

curl -k -u 'user1' https://splunk-server:8089/servicesNS/admin/search/search/jobs/export -d 'preview=false' -d 'output_mode=json' -d 'search=|savedsearch syslog_stats latest="-2d@d" earliest="-3d@d" span=1' | jq .

Results (note how the result is JSON, but devices is an array of strings, not JSON):

{
  "preview": false,
  "offset": 0,
  "lastrow": true,
  "result": {
    "MsgType": "LINK-3-UPDOWN",
    "devices": [
      "{\"device\":\"1.1.1.1\",\"events\":12,\"deviceId\":null}",
      "{\"device\":\"2.2.2.2\",\"events\":128,\"deviceId\":1}",
      "{\"device\":\"3.3.3.3\",\"events\":217,\"deviceId\":2}"
    ],
    "total": "357"
  }
}

Query:

| tstats count as events where index=X-syslog Severity<=4 earliest=-3d@d latest=-2d@d by _time, Severity, MsgType Device span=1d
| search MsgType="LINK-3-UPDOWN"
| eval devices=json_object("device", Device, "events", events, "deviceId", deviceId)
| fields - Device events _time Filter UUID Regex deviceId addressDeviceId
| table MsgType devices

Query result in the UI:

MsgType: LINK-3-UPDOWN
devices:
{"device":"1.1.1.1","events":12,"deviceId":null}
{"device":"2.2.2.2","events":128,"deviceId":null}
{"device":"3.3.3.3","events":217,"deviceId":null}
total: 357

As can be seen, in the UI the devices field is in JSON format (via json_object), but in the curl result it is a string in JSON format. Is there a way for the query to return the whole result as a JSON object, not a mix of JSON and strings? I have also tried tojson in a number of different ways, but with no success. Desired result, where devices is a JSON object and not treated as a string as above:
{
  "preview": false,
  "offset": 0,
  "lastrow": true,
  "result": {
    "MsgType": "LINK-3-UPDOWN",
    "devices": [
      {"device":"1.1.1.1","events":12,"deviceId":null},
      {"device":"2.2.2.2","events":128,"deviceId":1},
      {"device":"3.3.3.3","events":217,"deviceId":2}
    ],
    "total": "357"
  }
}

I can post-process the strings into JSON, but I would rather get JSON from Splunk directly. Thanks!
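One possible workaround, assuming Splunk 9.0+ where eval's json() function can re-type a valid JSON string as a JSON value: after whatever stats produces the multivalue devices field, join the values into a single array string and convert it, so the export serializes one JSON array instead of several quoted strings:

```
| eval devices="[" . mvjoin(devices, ",") . "]"
| eval devices=json(devices)
```

If json() is not available on your version, the string-joining step alone at least yields one parseable array string per result rather than an array of escaped strings.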
Today: index=sold Product=Acer, Product=iphone
Last week: index=sold Product=Samsung, Product=iphone

Query used:

index=sold earliest=-0d@d latest=now | stats count as Today by Product | appendcols [search index=sold earliest=-7d@d latest=-6d@d | stats count as Lastweeksameday by Product]

Since the Samsung product was not sold "Today", it does not show up in the output even though it was sold "Last week". Ideally it should show 0 for "Today" and 1 for "Last week" in the output. Could someone please help?
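Instead of appendcols (which pairs rows by position, not by Product), one approach is to search both time ranges at once and bucket each event into a period, so every Product sold in either range gets a row:

```
index=sold earliest=-7d@d latest=now
| eval period=case(_time >= relative_time(now(), "@d"), "Today",
                   _time < relative_time(now(), "-6d@d"), "Lastweeksameday")
| where isnotnull(period)
| chart count over Product by period
| fillnull value=0 Today Lastweeksameday
```

The case() drops the days in between, and fillnull supplies the 0 for products missing from one of the periods, such as Samsung on "Today".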
Does anyone know how to make use of a second y-axis on a line graph or column chart, or where it might be documented?
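In Simple XML dashboards, a second y-axis is enabled with chart options; the sketch below assumes a result field named overlay_field that should be drawn on the second axis:

```
<option name="charting.chart.overlayFields">overlay_field</option>
<option name="charting.axisY2.enabled">1</option>
<option name="charting.axisY2.title.text">Second axis</option>
```

These options go inside the panel's <chart> element; the field named in overlayFields is plotted against the right-hand axis while the remaining fields stay on the left.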
Greetings, I'm running Splunk Enterprise on a Windows Server (requirement driven). The Windows Server and Splunk have FIPS mode enabled (another requirement). The Splunk process (splunkd.exe) is causing the Windows server to generate an excessive number of 6417 events ("The FIPS mode crypto selftests succeeded") in the local Windows Security Log, creating excessive noise (4,500/hr) and eating up HDD space. Any indication why, and/or steps I can take to limit this, beyond turning off FIPS?
I need help with this requirement: compute the average response time with a 10% additional buffer (as a single number), using the eval option.
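A sketch using eval, with the index and field name as placeholders for your data:

```
index=...
| stats avg(response_time) as avg_response_time
| eval avg_with_buffer=round(avg_response_time * 1.1, 2)
```

Multiplying by 1.1 adds the 10% buffer, and stats has already reduced the result to a single number.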
In my query I am trying to combine the output from one index and sourcetype with the output of another index and sourcetype. I looked at the documentation, came across subsearches, and attempted to use the search command, but I am not getting any results, which leads me to believe I am definitely doing it wrong. Please see my example below.

index=A sourcetype=cat ProjectOwner="person" dest_owner="person" [search sourcetype=FW destp=1111 action=denied | table host] | srcdns srcip
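Two things likely stop this from returning results, sketched below under the assumption that the subsearch's host values correspond to the outer events' srcip field: the subsearch should return a field name that exists in the outer events (rename handles that), and the final pipe needs an actual command such as table:

```
index=A sourcetype=cat ProjectOwner="person" dest_owner="person"
    [search index=FW sourcetype=FW destp=1111 action=denied
     | fields host
     | rename host as srcip]
| table srcdns srcip
```

The explicit index=FW in the subsearch is also an assumption here; without an index, a subsearch runs against only the default indexes and may return nothing.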