All Topics

Hello all,

1. How do I calculate the average size of a syslog message for a particular source, in GB, using a Splunk query?
2. What is an easy formula to calculate EPS (events per second)?

Thank you in advance.
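A minimal sketch of one way to approach both questions in a single search, assuming the events are already indexed, that len(_raw) is an acceptable proxy for message size, and that the index and source below are placeholders:

index=your_syslog_index source="udp:514"
| eval bytes=len(_raw) ``` per-event size in bytes ```
| stats avg(bytes) as avg_bytes sum(bytes) as total_bytes count as events
| eval avg_gb=avg_bytes/1024/1024/1024, total_gb=total_bytes/1024/1024/1024
| addinfo ``` exposes the search time range as info_min_time / info_max_time ```
| eval eps=round(events/(info_max_time-info_min_time),2)

EPS here is simply the event count divided by the number of seconds in the searched time range.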
I have a lookup with a CIDR-matched ("advanced") field which contains:

id  cidr_field
1   1.1.1.1/24
2   8.8.8.8/24

If I search for a single IP in the range, e.g.:

| makeresults | eval ip="8.8.8.1" | lookup mylookup cidr_field as ip OUTPUT id

it works correctly. But if I try to search for a CIDR, it does not return any result:

| makeresults | eval ip="8.8.8.8/28" | lookup mylookup cidr_field as ip OUTPUT id

So how can I search for a CIDR within another CIDR?
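In case it helps, one possible workaround sketch, under the assumption that "CIDR within another CIDR" can be approximated by checking whether the network address of the searched range falls inside one of the lookup ranges (lookup name mylookup as in the post):

| makeresults
| eval ip="8.8.8.8/28"
``` strip the prefix length so the CIDR-matched lookup gets a plain IP to test ```
| eval probe=mvindex(split(ip,"/"),0)
| lookup mylookup cidr_field as probe OUTPUT id

This only tests containment of the range's first address, so it will not catch partial overlaps; for full overlap logic you would likely need to compare the ranges yourself (e.g. cidrmatch() against an inputlookup) rather than rely on the lookup's CIDR matching.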
I have a CSV file for ingestion like this. It needs to be monitored via inputs. I don't want to use INDEXED_EXTRACTIONS = CSV here. Without it I am able to get the feed in successfully, but I am not able to extract the fields I want.

File sample:

"NAME","AGE","GENDER"
"John","32","MALE"
"ROSE","23","FEMALE"

props.conf:

[mysourcetype]
FIELD_DELIMITER = ,
FIELD_NAMES = "NAME","AGE","GENDER"
HEADER_FIELD_LINE_NUMBER = 1
HEADER_FIELD_DELIMITER = ,
FIELD_QUOTE = "
HEADER_FIELD_QUOTE = "
DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true

No luck. Any ideas?
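Since the FIELD_* and HEADER_* settings in props.conf generally only take effect together with INDEXED_EXTRACTIONS, one commonly used alternative sketch is a search-time, delimiter-based extraction via REPORT; the stanza and transform names below are assumptions:

props.conf:
[mysourcetype]
SHOULD_LINEMERGE = false
DATETIME_CONFIG = CURRENT
REPORT-csv_fields = mysourcetype_csv

transforms.conf:
[mysourcetype_csv]
DELIMS = ","
FIELDS = "NAME","AGE","GENDER"

Note that with a plain comma delimiter the surrounding double quotes stay inside the extracted values, so they may still need to be stripped (for example with an additional EVAL- or SEDCMD- setting).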
My search query checks the last 15 minutes in 5-minute intervals. Sample query:

index=XXXX sourcetype=XXX* env=XXX OR env=XXX "Continuation timed out"
| bucket _time span=5m
| timechart span=5m count AS Devices
| eval inc_severity=case('Devices'>=450, "3")
| eval support_group=case('Devices'>=450, "XXXXX")
| eval dedup_tag=case('Devices'>=450, "XXXXXX")
| eval corr_tag=case('Devices'>=450, "XXXXXX")
| eval event_status=case('Devices'>=450, "1")
| eval service_condition=case('Devices'>=450, "1")
| table sev event dedup corr support_group service_condition _time Devices
| sort 3 - Devices
| sort _time
| where isnotnull('inc_severity')
| where 'Devices'>450

Based on the above query my output is as follows:

sev  event  dedup  corr  support_group  service_condition  _time  Devices
3    1      xxx    xxx   xxx            1                  x      700
3    1      xxx    xxx   xxx            1                  y      900
3    1      xxx    xxx   xxx            1                  z      1000

But what I am trying to get is output like this:

sev  event  dedup  corr  support_group  service_condition  _time    Devices
3    1      xxx    xxx   xxx            1                  x,y,z    700,900,1000
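If the goal is just to collapse those rows into one, a minimal sketch of a final step that could replace the existing table command (field names taken from the evals above):

| convert ctime(_time) ``` make the timestamps readable before they are listed ```
| stats first(inc_severity) as sev first(event_status) as event first(dedup_tag) as dedup first(corr_tag) as corr first(support_group) as support_group first(service_condition) as service_condition list(_time) as _time list(Devices) as Devices
| eval _time=mvjoin(_time,","), Devices=mvjoin(Devices,",")

stats list() keeps the values in row order, and mvjoin() turns each multivalue field into the comma-separated string shown in the desired output.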
Hi Team, I am getting a 403 Forbidden error when I access the course. Please help me with this. Thanks, Rakesh K
Hi guys, I have a question about data models.

Eventually, I want to create complex correlation rules by finding mutual indicators between different log sources. In this case, the mutual indicator can be a username.

I'm looking at two different ways to make this happen (there might be a third or fourth way, maybe a subsearch or a join). Don't focus on the use-case logic, this is just an example.

Let's say I have a base query which is:

sourcetype="WinEventLog" EventCode=4625

(It has authentication failures for "korhan" in the user field.)

Now, I want to join an event from the data model. From the proxy logs, the data model has malware URLs that users accessed:

|from datamodel:"proxylog"."malwarelog"

(Query of the data model: index=main sourcetype=syslog category=Malware | stats count by user uri category)

When I run this data model query, it basically gives me: user: korhan and count: 3, let's say.

Now there are two events, Microsoft and proxy logs. I want to say that if an auth failure happens first and the same user is also in the data model, I want to create an alarm.

When I tried to combine the two queries, I could not find how to create a relation on the user fields:

sourcetype="WinEventLog" EventCode=4625 |from datamodel:"proxylog"."malwarelog" | fields user

"where" is not working for the data model (it works for a lookup table). Do you have any ideas? You can recommend anything else instead of the data model; the data model seemed more effective to me than join queries.

Thanks for the help! I found this: https://community.splunk.com/t5/Knowledge-Management/How-do-you-write-a-correlation-search-with-a-data-model/m-p/310459#M2705 but it did not work. It returns 0 results.

Korhan
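A minimal sketch of one way to tie the two sources together on the user field, assuming both the Windows events and the data model expose a field literally named user (if the 4625 events only have Account_Name, it would need to be aliased to user first):

sourcetype="WinEventLog" EventCode=4625
    [| from datamodel:"proxylog"."malwarelog"
     | stats count by user
     | fields user ]
| stats count as auth_failures by user

The subsearch in square brackets returns the distinct user values seen in the malware dataset, and Splunk turns that list into an implicit (user="korhan" OR user="...") filter on the outer authentication-failure search, so only overlapping users survive. The "auth failure happens first" ordering would still need an extra condition (for example comparing earliest/latest times per user), which this sketch does not cover.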
Good day! The database records are duplicated on every run, and when using a "Rising Column" input, various errors are displayed. I really ask for help; I cannot figure out whether it is a problem with the database. At the same time, half of the queries work if run through a thick client.
Hello, I would like to pass a value from a joined search (e.g. in this case the "Side") to the final table. I have tried different append approaches with no success. I also believe the performance of the query below could be improved; it works, but maybe the use of transaction is not ideal.

cs_stage=PROD cs_component_id=TOU TOFF_MARGIN_CALCULATOR
| rex field=_raw "channel name: (?<reqid>.*),"
| transaction reqid
| join reqid
    [search cs_stage=PROD cs_component_id=TOU rest.ValidateTradingOrderRestAdaptor.validateTradingOrder
    | rex field=_raw "<transactionType>(?<Side>.*)<\/transactionType>"]
| rex field=_raw "inflight_order_exposure: (?<InflightOrderExposure>\d*\D*\d*)"
| rex field=_raw "open_orders_exposure: (?<OpenOrdersExposure>\d*\D*\d*)"
| rex field=_raw "positions_exposure: (?<PositionExposure>\d*\D*\d*)"
| rex field=_raw "total_potential_exposure: (?<TotalPotentialExposure>\d*\D*\d*)"
| rex field=_raw "limit: (?<Limit>\d*\D*\d*\D*\d*)"
| rex field=_raw "limit_type_value: (?<LimitTypeValue>\S*)"
| rex field=_raw "available_limit: (?<AvailableLimit>\d*\D*\d*\D*\d*)\s*,"
| rex field=_raw "cif_=(?<CIF>.*[0-9]),memoizedIsInitialized"
| rex field=_raw "csfid_=(?<csfiid>.*),shortSale_"
| table reqid _time CIF Side csfiid InflightOrderExposure OpenOrdersExposure PositionExposure TotalPotentialExposure Limit LimitTypeValue AvailableLimit duration
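One pattern that is often suggested instead of transaction plus join is to search both event types in a single query and roll them up with stats by reqid; a rough sketch, assuming both event types contain the "channel name:" string so that reqid can be extracted from each (the remaining rex extractions from the query above would slot in before the stats, with matching values() aggregations added):

cs_stage=PROD cs_component_id=TOU (TOFF_MARGIN_CALCULATOR OR "rest.ValidateTradingOrderRestAdaptor.validateTradingOrder")
| rex field=_raw "channel name: (?<reqid>.*),"
| rex field=_raw "<transactionType>(?<Side>.*)<\/transactionType>"
| stats earliest(_time) as _time range(_time) as duration values(Side) as Side by reqid

Because everything is grouped by reqid in one pass, the Side value extracted from the validateTradingOrder events ends up on the same row as the exposure fields, without a join.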
I was wondering: how are foreach-generated searches treated with regard to search limits? I mean, normally you have your maximum number of concurrent searches set in limits.conf; it can affect how/when/where your searches will be scheduled to run and can generate alerts in case of too many delayed searches. Fair enough. But how are the subsearches spawned by the foreach command counted against the limit? If I do a foreach over, let's say, 50 fields, will it consume 50 searches? Will they all run in parallel, or will they be sequenced somehow? Is there any good doc describing this?
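For what it's worth, a small self-contained illustration of how foreach behaves: it rewrites its template once per matching field inside the same search pipeline rather than launching separate searches, so the whole thing still runs as one search:

| makeresults
| eval field_a=1, field_b=2, field_c=3
``` the eval template below is expanded once for each field matching field_* ```
| foreach field_*
    [ eval <<FIELD>> = '<<FIELD>>' * 10 ]

This is my understanding of the documented behaviour: the concurrent-search limits in limits.conf apply to real searches and subsearches (join, append, etc.), not to foreach's inline template expansion.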
Hi, I have some data which spans multiple systems, for example:

system  app   fld1  fld2  fld3
sys1    appA  1     0     0
sys1    appA  0     0     0
sys1    appB  0     1

What I'm trying to do is create a generic dashboard, so I would need to rename the fields based on the "app" value. Something similar to:

when app=="appA": rename "fld1" as "appAfld1", rename "fld2" as "appAfld2"
when app=="appB": rename "fld1" as "appBfld1"

Then, in a table, only show the renamed fields, i.e. a conditional table statement again based on the "app" value. Any ideas on how/if that can be achieved? Alternatively I could just create separate dashboards, but there is a lot of repetition in that, so I suspect there is a better way. Thanks in advance for any ideas.
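One sketch of a generic approach, assuming the field names really are fld1/fld2/fld3 as in the example: copy each field into a new name prefixed with the app value (using eval's {field} dynamic-naming syntax inside foreach), then only table the prefixed fields:

... ``` for each fldN, create a copy named <app value> + fldN, e.g. appAfld1 ```
| foreach fld*
    [ eval {app}<<FIELD>> = '<<FIELD>>' ]
| fields - fld*
| table system app*

Whether this is nicer than separate dashboards depends on how different the per-app tables need to be, but it avoids hard-coding a rename per app.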
Hello, events for the simple query index=os sourcetype=cpu are not breaking for users without the admin role.

(Screenshot: results for other users without the admin role)
(Screenshot: results for a user with the admin role)

What could be the reason? Any suggestions please.
Hi Splunkers, I am getting the below error while configuring SSL certificates among the Splunk hosts:

ERROR DistributedTracer [2406 MainThread] - Couldn't find "distributed_tracer" in server.conf.

Can you help me with this?
I have JSON like this:

{
  "A": [
    {
      "B": [
        { "status": "2", "value": "1" },
        { "status": "1", "value": "2" },
        { "status": "3", "value": "4" },
        { "status": "5", "value": "8" }
      ]
    }
  ]
}

I want to extract the field "value". I tried:

spath input=field_name output=value path=A{0}.B{}.value

but it's not working. Please help.
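A minimal, self-contained sketch that pulls out all the value entries, assuming the JSON string is in a field called field_name as in the post; the key point is that an empty {} fans out across every element of an array, whereas an explicit index like A{0} restricts the path to a single element:

| makeresults
| eval field_name="{\"A\":[{\"B\":[{\"status\":\"2\",\"value\":\"1\"},{\"status\":\"1\",\"value\":\"2\"},{\"status\":\"3\",\"value\":\"4\"},{\"status\":\"5\",\"value\":\"8\"}]}]}"
| spath input=field_name output=value path="A{}.B{}.value"

With this path, value comes back as a multivalue field containing 1, 2, 4 and 8.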
Hi, I want to display an average line in my bar chart. I am doing this, but instead of a line a third set of bars is displayed. How can I add an average line instead of another bar?

| timechart span=2h max(debut) as "Début de session", max(fin) as "Fin de session"
| eventstats avg("Début de session") as Average
| eval Average=round(Average,0)

Thanks
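If the panel is a Simple XML dashboard chart, one sketch is to keep the Average column from eventstats and declare it as a chart overlay, which renders it as a line on top of the columns:

<chart>
  <search>
    <query>... | timechart span=2h max(debut) as "Début de session", max(fin) as "Fin de session" | eventstats avg("Début de session") as Average | eval Average=round(Average,0)</query>
  </search>
  <option name="charting.chart">column</option>
  <option name="charting.chart.overlayFields">Average</option>
</chart>

The same overlayFields setting is also exposed in the chart's Format menu as "Chart Overlay" if you prefer to set it through the UI.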
Why does Splunk Cloud have the search head as a standalone instance and not in a search head cluster? How does the Splunk engineering team manage maintenance and upgrade tasks in this scenario?
I am connected inside a WAN that has Wi-Fi APs and LAN cabling, all on the same network. The network has a router and then just end systems; there is no external firewall or similar device that can give direct logs of the data flow in the network. Where can I get logs to work with in Splunk so that I can continuously monitor in real time, to visualize things like internet speed, data usage per host, etc.?
Hi Team, I am doing a set of PoCs to explore Splunk features. While doing so I am able to send data to Splunk Observability using OpenTelemetry (OTel), and I would like to know whether OTel can also send data to Splunk Enterprise. If yes, can you please guide me on how?
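Yes, in general the OpenTelemetry Collector can send data to Splunk Enterprise through the HTTP Event Collector (HEC) using its splunk_hec exporter, pointed at https://<your-splunk-host>:8088 with a HEC token. On the Splunk Enterprise side, a rough sketch of the inputs.conf needed to enable HEC and create a token (token value, index and sourcetype here are placeholders):

[http]
disabled = 0

[http://otel_hec]
token = <generated-token-guid>
index = otel
sourcetype = otel

The collector's exporter configuration then references the same endpoint and token; the exact exporter settings are documented for the Splunk OpenTelemetry Collector / opentelemetry-collector-contrib.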
Hi Splunk gurus,

I'm hoping that there is a simple answer for this issue. We have recently upgraded to Splunk Enterprise 8.2. Our servers (RHEL 7/8) are all running Universal Forwarders 8.0. The issue we have found is that the UF does not include the Python 2.7/3.7 binaries and libs as part of its install package (yes, I know this has not been the case for a long time). This is not an issue if you are installing the forwarder on a Splunk node, as the Enterprise version includes these and installs them (as far as I can tell) into the correct locations in the forwarder for it to use internally.

The problem appears when trying to upgrade the standalone Linux package (.tgz or .rpm) to 8.2.2.1, as the binary and packages for Python 3.7 are required (regardless of the python.version setting) to run the migration upgrade scripts. As RHEL 7/8 only has a supported package for Python 3.6, this becomes an even more pressing issue.

I have installed Python 3.7 from source to try as a workaround and linked it to /opt/splunkforwarder/bin/python3.7 with some success. The main problem seems to be that the site-packages path is hard-coded into the forwarder to look for packages in /opt/splunkforwarder/lib/python3.7/site-packages regardless of the Python lib path locations. E.g. if I symlink /usr/local/bin/python3.7 -> /opt/splunkforwarder/bin/python3.7 I get these kinds of errors in splunkd.log:

/opt/splunkforwarder/bin/python3.7: can't open file '/opt/splunkforwarder/lib/python3.7/site-packages/splunk/clilib/cli.py': [Errno 2] No such file or directory

So the splunk cmd which runs Python scripts from apps cannot even start correctly, regardless of the python.version value set in the app or server.conf.

So my actual question is: how do we get the Python 2.7 and 3.7 binaries and associated required packages into a forwarder? Is there a .tgz or .rpm that we can use to get the internal Python versions the forwarder requires installed in the right locations? Or a full forwarder .rpm that includes the binaries for exactly this standalone purpose? This would seem to be a significant oversight that assumes Splunk Enterprise will always be available to use as a base installer for all servers, and additionally that Python 3.7 is always available or easily installed.

A much less desirable option would be to roll back the forwarders (and all deployed apps) to the latest 7.x version, but this limits moving forward and will create many more compatibility issues than it will solve.

Any helpful hints, pointers or advice would be greatly appreciated.

Regards, Kieren
Hi all, I currently have multiple Splunk servers configured in outputs.conf for the universal forwarders, but I am wondering if there is a way to only send data to the second server if the first server becomes unreachable. Thanks,
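For reference, a sketch of the usual outputs.conf with two indexers in one target group (hostnames are placeholders). Note that out of the box the forwarder load-balances across every server in the group rather than treating the second one purely as a standby, so it only behaves like a failover target in the sense that traffic shifts to it when the first host is down:

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997
autoLBFrequency = 30

A strict primary/backup arrangement does not appear to be a single outputs.conf switch; it is typically approximated with DNS or by accepting the load-balanced behaviour above.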
Hello, I am streaming a list of data with the most recent timestamp, but the data is getting displayed at a different time. For example:

t=1632967410.582567 devicename=abc Ethernet.dst=### Ethernet.src=### Ethernet.type=65535
t=1632967410.582567 devicename=abc Ethernet.dst=### Ethernet.src=### Ethernet.type=65535
t=1632967410.582567 devicename=abc Ethernet.dst=### Ethernet.src=### Ethernet.type=65535

The epoch conversion of the above timestamp (t=1632967410.582567) is 7:03:30.582 PM, but the data on the dashboard is displayed at 5:19:01.000 PM.

Background:
* The data is generated from a Python script; the data is a list of events, and each event is printed to stdout.
* I have tried to include additional line breaks between each event, but it is still streamed as a single chunk and displayed with a different timestamp.
* The version of the UF is 8.2.1 (build ddff1c41e5cf).
* The version of Splunk Enterprise is 8.1.2.

Can someone guide me on fixing this so the streamed data shows the correct timestamp? Thank you.
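A props.conf sketch for the instance that parses this sourcetype (indexer or heavy forwarder), assuming every event starts with the t=<epoch> pair; the sourcetype name is a placeholder. It breaks the incoming chunk into one event per t=... line and reads the epoch timestamp (with microseconds) instead of falling back to the time the data was received:

[my_stream_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=t=\d{10}\.)
TIME_PREFIX = ^t=
TIME_FORMAT = %s.%6N
MAX_TIMESTAMP_LOOKAHEAD = 20

If the events arrive as one large chunk from the script, the LINE_BREAKER lookahead is what splits them apart; the TIME_* settings then stop Splunk from stamping everything with the receipt time, which would explain the 5:19 PM display.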