All Topics


I am trying to create a dashboard with dynamic dropdowns using the new JSON-based Dashboard Studio. I'm not great at the XML of the classic dashboards, but there are a good number of videos/sites that help show how to do things and why. Dashboard Studio appears new enough that it doesn't have much for a rookie like me. I'd like something like https://www.youtube.com/watch?v=BJm04grvvf8 but for Dashboard Studio. Does anyone know of anything? I am coming up with nothing. I just have some documentation that I'm not great at reading and understanding at https://docs.splunk.com/Documentation/SplunkCloud/latest/DashStudio/inputs, found from https://community.splunk.com/t5/Dashboards-Visualizations/Dashboard-Studio-Dynamic-loading-a-dropdown-list/m-p/556552#M38690.
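For what it's worth, the general shape of a search-driven dropdown in the Dashboard Studio JSON source looks roughly like the fragment below. All the names (ds_hosts, host_tok, input_host, the query) are made up for illustration, and the exact options schema varies between Splunk versions, so treat this as a sketch to compare against the inputs documentation rather than something to paste in:

```
{
  "dataSources": {
    "ds_hosts": {
      "type": "ds.search",
      "options": {
        "query": "index=_internal | stats count by host"
      }
    }
  },
  "inputs": {
    "input_host": {
      "type": "input.dropdown",
      "title": "Host",
      "dataSources": {
        "primary": "ds_hosts"
      },
      "options": {
        "items": [
          { "label": "All", "value": "*" }
        ],
        "defaultValue": "*",
        "token": "host_tok"
      }
    }
  }
}
```

As I understand it, the dropdown merges the static items with the rows returned by its primary data source, and panels then reference the selection as $host_tok$ in their queries.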
Hello, I have a CSV file in this form:

```
2021-08-30 15:45:32;MOZILLA;j.dupont;FR6741557ERF;1.1.1.1;CONNEXION;;
2021-08-30 15:45:24;MOZILLA;j.dupont;FR6741557ERF;1.1.1.1;STATUS;;BDD
2021-08-30 15:45:16;MOZILLA;j.dupontFR6741557ERF;1.1.1.1;START;App_start;WEB
```

corresponding to these 8 fields: date,application,user,host,ip,type,detail,module

I have 2 questions:
1. How can I extract these fields?
2. How can I extract the fields at search time (to be retroactive on old logs)?

These are my current props.conf and transforms.conf, deployed on the Search Head + Indexers, and the inputs.conf file on the Universal Forwarder:

props.conf
```
[csvlogs]
disabled = false
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
KV_MODE = none
REPORT-fieldsextraction = logs_fields
```

transforms.conf
```
[logs_fields]
DELIMS = ";"
FIELDS = date,application,user,hostname,ip,type,detail,module
KEEP_EMPTY_VALS = true
```

inputs.conf
```
[monitor://D:\repository\logs.csv]
disabled = false
sourcetype = csvlogs
index = logs_index1
```

Do you have any solutions?
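Since REPORT-based extractions happen at search time, they are retroactive on already-indexed events as long as the props/transforms live on the search head. One quick way to sanity-check the delimiter logic without touching configs is a rex over a sample line (a sketch; the rex below is just an inline equivalent of the DELIMS transform):

```
| makeresults
| eval _raw="2021-08-30 15:45:32;MOZILLA;j.dupont;FR6741557ERF;1.1.1.1;CONNEXION;;"
| rex field=_raw "^(?<date>[^;]*);(?<application>[^;]*);(?<user>[^;]*);(?<hostname>[^;]*);(?<ip>[^;]*);(?<type>[^;]*);(?<detail>[^;]*);(?<module>[^;]*)"
| table date application user hostname ip type detail module
```

If this extracts the fields as expected, the transforms-based version should behave the same once the sourcetype stanza matches.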
Has anyone configured Splunk to collect logs from Cloud.gov? Please share how it is done. Thanks a million.
I'm new to Splunk and I'm trying to do something that is probably basic but I haven't been able to figure out how to do it.   I have a log in Splunk which contains an http_query along the lines of: ``` my_object[prop1]=someVal&my_object[prop2]=someOtherVal ``` I'm trying to use a timechart to inspect these values. I've tried: `timechart count by my_object[prop1]` which tells me prop1 is undefined. Then also tried `timechart count by my_object.prop1`  which gives me a time series with NULL everywhere. How can I do this?
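The bracket characters are not part of a field name Splunk extracts automatically, which is why both `my_object[prop1]` and `my_object.prop1` come up empty. One approach (a sketch; the field and value names are taken from the question) is to pull the value out with rex before the timechart:

```
... your base search ...
| rex field=http_query "my_object\[prop1\]=(?<prop1>[^&]+)"
| timechart count by prop1
```

The regex captures everything between `my_object[prop1]=` and the next `&`, so each distinct value becomes its own timechart series.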
I have the following query and I am using it in a dashboard to show the errors categorized:

```
index=myindex sourcetype=mysource_type:app | spath message | regex message="^.*error creating account.*$$" | top message
```

Now, this is working, but it is showing the complete messages. The error messages have the following format most of the time:

```
message: Log: "error creating account {\"status\":\"error\",\"message\":\"Error while creating account, 500 - Internal Server Error\"}"
```

When the stats table is displayed, I would like to show only the message part from this error message, that is, it only needs to show "Error while creating account, 500 - Internal Server Error". It would be very helpful if someone could point out how I can do this.
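One approach (a sketch based only on the sample message above) is to extract the inner JSON message with rex and run top over that instead:

```
index=myindex sourcetype=mysource_type:app
| spath message
| rex field=message "\\\"message\\\":\\\"(?<err_msg>[^\\\"]+)\\\""
| top err_msg
```

The escaping assumes the backslash-quote sequences survive into the `message` field as shown in the sample; if spath has already unescaped them, the pattern would need fewer backslashes (e.g. `"message\":\"(?<err_msg>[^\"]+)"` style), so check against a real event first.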
Hello, I've put two timecharts on top of each other to compare their events by time. Both timecharts are using the same time range and span. The top timechart has many data points whereas the bottom has just a few. How can I show the same time range on the x-axis in both timecharts?
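In my experience, timechart emits a bucket for every span across the search's time range, so pinning both panel searches to the same explicit earliest/latest and span (and filling empty buckets) usually lines the axes up. A sketch with assumed index/range values:

```
index=myindex earliest=-24h latest=now
| timechart span=1h count
| fillnull value=0
```

If the panels inherit different time pickers or one uses a relative default, the ranges can silently diverge, which is worth checking first.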
When we create new alerts for testing, we have the correlation search create the notable event with a status of "Testing". This way, any alerts that fire go into Incident Review with a status of "Tes... See more...
When we create new alerts for testing, we have the correlation search create the notable event with a status of "Testing". This way, any alerts that fire go into Incident Review with a status of "Testing". The problem is, when we are ready to move them out of testing, we change the notable event configuration to have a status of "New". But when we change that configuration, it changes all of the old notable events that fired with a "Testing" status to "New" which throws off metrics because suddenly there's an influx of notable events that show up as being "New" even though they were previously in status "Testing." Is there a way to change the default status for notable events and have it NOT change the old ones that previously fired with the old default status?
Need help with a regex to show me if the input field has spaces at the leading or trailing end of the string, OR if it contains a " (double quote) anywhere in the string.

```
<change>
  <eval token="validationResult">if(match(value, "^\s|\s$|\""), "padded space or doublequote identified", "All Good")</eval>
</change>
```
I noticed the hardening standards state: "Disable automatic chart recovery in the analytics workspace. See Charts in the Splunk Analytics Workspace in the Splunk Analytics Workspace Using the Splunk Analytics Workspace manual." I looked at the link, but did not find any explanation of exactly what risk it poses to keep that feature enabled. Hence, I am seeking some clarification.
Hello dears, how can I sort these field values?

Field = "port"
```
0/1/0/2/
0/8/0/7/
0/2/0/3/
0/5/0/2/
0/6/0/3/
0/16/0/2
0/18/0/6
0/16/0/5
0/4/0/2/
0/6/0/2/
0/18/0/2
0/12/0/4
0/3/0/7/
```

Regards.
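A plain `| sort port` compares the strings lexicographically, which puts 0/16 before 0/2. One approach (a sketch) is to split the path into its numeric parts and sort on those:

```
... | rex field=port "^(?<p1>\d+)/(?<p2>\d+)/(?<p3>\d+)/(?<p4>\d+)"
| sort 0 num(p1) num(p2) num(p3) num(p4)
| fields - p1 p2 p3 p4
```

The `num()` wrapper forces numeric comparison per segment, and `sort 0` removes the default 10,000-result cap.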
Hello dear all,
1. How can I calculate the average size of a syslog message for a particular source, in GB, using a Splunk query?
2. What is an easy formula to calculate EPS (events per second)?
Thank you in advance.
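A sketch, with the index/source names assumed: `len(_raw)` approximates the per-event message size in bytes, and `addinfo` exposes the search window boundaries so EPS is simply event count divided by the window length in seconds:

```
index=myindex source=mysyslogsource
| eval bytes=len(_raw)
| addinfo
| stats avg(bytes) as avg_bytes count as events min(info_min_time) as t1 max(info_max_time) as t2
| eval avg_gb = avg_bytes / pow(1024,3)
| eval eps = events / (t2 - t1)
```

Note that an average single syslog message expressed in GB will be a tiny fraction; if the goal is total daily volume per source, `stats sum(bytes)` over a fixed range is the more useful number.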
I have a lookup with a CIDR advanced-match field which contains:

```
id  cidr_field
1   1.1.1.1/24
2   8.8.8.8/24
```

If I search for a single IP in the range, i.e.:

```
| makeresults
| eval ip="8.8.8.1"
| lookup mylookup cidr_field as ip OUTPUT id
```

it works correctly. But if I try to search for a CIDR, it does not return any result:

```
| makeresults
| eval ip="8.8.8.8/28"
| lookup mylookup cidr_field as ip OUTPUT id
```

So how can I search for a CIDR within another CIDR?
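CIDR matching in lookups compares a single IP against the CIDR field, so a CIDR-valued input never matches. One workaround (a sketch) is to strip the prefix length and look up the candidate network's base address instead:

```
| makeresults
| eval ip="8.8.8.8/28"
| eval ip_base=replace(ip, "/\d+$", "")
| lookup mylookup cidr_field as ip_base OUTPUT id
```

This is only a valid containment test when the input CIDR is at least as narrow as the lookup CIDRs (a /28 inside a /24 here); for arbitrary overlap checks you would need to compare network boundaries explicitly.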
I have a CSV file for ingestion like this. It needs to be monitored via inputs. I don't want to use INDEXED_EXTRACTIONS = csv here. Without it I am able to get the feed in successfully, but I am not able to extract the fields I wanted.

File sample:
```
"NAME","AGE","GENDER"
"John","32","MALE"
"ROSE","23","FEMALE"
```

props.conf:
```
[mysourcetype]
FIELD_DELIMITER = ,
FIELD_NAMES = "NAME","AGE","GENDER"
HEADER_FIELD_LINE_NUMBER = 1
HEADER_FIELD_DELIMITER = ,
FIELD_QUOTE = "
HEADER_FIELD_QUOTE = "
DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
```

No luck. Any ideas?
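As far as I know, FIELD_NAMES, FIELD_DELIMITER, and the HEADER_FIELD_* settings are structured-data settings that only take effect together with INDEXED_EXTRACTIONS, which would explain why they do nothing here. A pure search-time alternative is a delimiter-based REPORT (a sketch; stanza names are made up):

props.conf
```
[mysourcetype]
REPORT-csvfields = csv_fields
```

transforms.conf
```
[csv_fields]
DELIMS = ","
FIELDS = NAME,AGE,GENDER
```

Both files need to be on the search head for this to apply. Depending on the Splunk version, the surrounding double quotes may remain in the extracted values, in which case a follow-up `eval NAME=trim(NAME, "\"")` (or a regex-based transform) can clean them up.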
My search query checks the last 15m for each 5-minute interval. Sample query:

```
index=XXXX sourcetype=XXX* env=XXX OR env=XXX "Continuation timed out"
| bucket _time span=5m
| timechart span=5m count AS Devices
| eval inc_severity=case('Devices'>=450, "3")
| eval support_group=case('Devices'>=450, "XXXXX")
| eval dedup_tag=case('Devices'>=450, "XXXXXX")
| eval corr_tag=case('Devices'>=450, "XXXXXX")
| eval event_status=case('Devices'>=450, "1")
| eval service_condition=case('Devices'>=450, "1")
| table sev event dedup corr support_group service_condition _time Devices
| sort 3 - Devices
| sort _time
| where isnotnull('inc_severity')
| where 'Devices'>450
```

Based on the above query, my output is as follows:

```
sev  event  dedup  corr  support_group  service_condition  _time  Devices
3    1      xxx    xxx   xxx            1                  x      700
3    1      xxx    xxx   xxx            1                  y      900
3    1      xxx    xxx   xxx            1                  z      1000
```

But what I am trying to get is output as follows:

```
sev  event  dedup  corr  support_group  service_condition  _time  Devices
3    1      xxx    xxx   xxx            1                  x,y,z  700,900,1000
```
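One way to collapse the rows (a sketch using the field names above) is a stats over the constant columns, collecting _time and Devices as multivalues and then joining them with commas:

```
...
| stats list(_time) as times list(Devices) as Devices by sev event dedup corr support_group service_condition
| eval times=mvjoin(times, ","), Devices=mvjoin(Devices, ",")
```

`list()` preserves the original order of values. Since _time here would be joined as raw epoch numbers, converting it first with `eval t=strftime(_time, "%H:%M")` (name assumed) gives readable timestamps in the joined cell.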
Hi Team, I am getting a 403 Forbidden error when I access the course. Please help me with this. Thanks, Rakesh K
Hi guys, I have a question about the data model.

Eventually, I want to create complex correlation rules by finding mutual indicators between different log sources. In this case, the mutual indicator can be a username.

I'm looking at two different ways to make this happen (there might be a third or fourth way, maybe a subsearch or join). Don't focus on the use-case logic, this is just an example.

Let's say that I have a base query which is:

```
sourcetype="WinEventLog" EventCode=4625
```

(It has authentication failures for "korhan" in the user field.)

Now, I want to join an event from the data model. From proxy logs, the data model has malware URLs that users accessed:

```
| from datamodel:"proxylog"."malwarelog"
```

(Query of the data model: index=main sourcetype=syslog category=Malware | stats count by user uri category)

When I run this data model query, it basically gives me, let's say: user: korhan and count: 3.

Now there are two event sources, Microsoft and proxy logs. I want to say that if an auth failure happens first, and if the same user is also in the data model, I want to create an alarm.

When I tried to combine the two queries, I was not able to find how to create a relation on the user fields:

```
sourcetype="WinEventLog" EventCode=4625 | from datamodel:"proxylog"."malwarelog" | fields user
```

"where" is not working for the data model (it works for a lookup table). Do you have any idea? You can recommend anything else instead of the data model; the data model seemed more effective to me than join queries. Thanks for the help!

I found this: https://community.splunk.com/t5/Knowledge-Management/How-do-you-write-a-correlation-search-with-a-data-model/m-p/310459#M2705 but it did not work, it returns 0 results.

Korhan
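One common pattern (a sketch using the names above) is to feed the data model's users into the base search as a subsearch filter, so only authentication failures for users who also appear in the malware log survive:

```
sourcetype="WinEventLog" EventCode=4625
    [| from datamodel:"proxylog"."malwarelog"
     | stats count by user
     | fields user ]
| stats count by user
```

The subsearch's `user` column becomes an implicit `user=... OR user=...` filter on the outer search, so the field name returned by the subsearch must match the field name in the Windows events (rename it in the subsearch, e.g. `| rename user as Account_Name`, if they differ).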
Good day! The database rows are duplicated on every run, and when I use the "Rising Column" option, various errors are displayed. I would really appreciate help; I cannot tell whether this is a Splunk problem or a database problem. At the same time, half of the queries work if run through a thick client.
Hello, I would like to pass a value from a joined search (in this case the "Side") to the final table. I tried different append approaches with no success. Also, I believe the performance of the query below could be improved; it works, but maybe the use of transaction is not ideal.

```
cs_stage=PROD cs_component_id=TOU TOFF_MARGIN_CALCULATOR
| rex field=_raw "channel name: (?<reqid>.*),"
| transaction reqid
| join reqid
    [search cs_stage=PROD cs_component_id=TOU rest.ValidateTradingOrderRestAdaptor.validateTradingOrder
    | rex field=_raw "<transactionType>(?<Side>.*)<\/transactionType>"]
| rex field=_raw "inflight_order_exposure: (?<InflightOrderExposure>\d*\D*\d*)"
| rex field=_raw "open_orders_exposure: (?<OpenOrdersExposure>\d*\D*\d*)"
| rex field=_raw "positions_exposure: (?<PositionExposure>\d*\D*\d*)"
| rex field=_raw "total_potential_exposure: (?<TotalPotentialExposure>\d*\D*\d*)"
| rex field=_raw "limit: (?<Limit>\d*\D*\d*\D*\d*)"
| rex field=_raw "limit_type_value: (?<LimitTypeValue>\S*)"
| rex field=_raw "available_limit: (?<AvailableLimit>\d*\D*\d*\D*\d*)\s*,"
| rex field=_raw "cif_=(?<CIF>.*[0-9]),memoizedIsInitialized"
| rex field=_raw "csfid_=(?<csfiid>.*),shortSale_"
| table reqid _time CIF Side csfiid InflightOrderExposure OpenOrdersExposure PositionExposure TotalPotentialExposure Limit LimitTypeValue AvailableLimit duration
```
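An alternative pattern (a sketch, not tested against this data) is to search both event types in one pass and aggregate by reqid with stats, which avoids both transaction and join and lets Side ride along with the other extracted fields. It assumes both event types carry an extractable reqid:

```
cs_stage=PROD cs_component_id=TOU (TOFF_MARGIN_CALCULATOR OR rest.ValidateTradingOrderRestAdaptor.validateTradingOrder)
| rex field=_raw "channel name: (?<reqid>.*),"
| rex field=_raw "<transactionType>(?<Side>.*)<\/transactionType>"
| rex field=_raw "cif_=(?<CIF>.*[0-9]),memoizedIsInitialized"
| stats earliest(_time) as _time range(_time) as duration values(Side) as Side values(CIF) as CIF by reqid
```

`values()` carries each field from whichever event type it occurs in, so the rest of the rex extractions can be added the same way and the final `table` stays unchanged.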
I was wondering... how are foreach-generated searches treated regarding the search limits? I mean, normally you have your maximum number of concurrent searches set in your limits.conf; it can affect how/when/where your searches will be scheduled to run and can generate alerts in case of too many delayed searches. Fair enough. But how are subsearches spawned from the foreach command counted against the limit? If I do a foreach over, let's say, 50 fields, will it consume 50 searches? Will they all be run in parallel, or will they be sequenced somehow? Any good doc describing this?
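As far as I understand it, the bracketed block in foreach is not a dispatched subsearch at all: it is a template expanded per matching field and evaluated inline within the same search job, so it should not count against the concurrent-search limit regardless of how many fields match. For example, this runs as a single search:

```
| makeresults
| eval a=1, b=2, c=3
| foreach a b c
    [ eval total = coalesce(total, 0) + '<<FIELD>>' ]
```

The `<<FIELD>>` token is substituted textually for each field name before evaluation, which is quite different from a `[search ...]` subsearch that spawns its own job.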
Hi, I have some data which spans multiple systems, example below:

```
system  app   fld1  fld2  fld3
sys1    appA  1     0     0
sys1    appA  0     0     0
sys1    appB  0     1
```

What I'm trying to do is create a generic dashboard, so I would need to rename the fields based on the "app" value. So something similar to:

when app=="appA": rename "fld1" as "appAfld1", rename "fld2" as "appAfld2"
when app=="appB": rename "fld1" as "appBfld1"

Then in a table, only show the renamed fields, so a conditional table statement, again based on the "app" value. Any ideas on how/if that can be achieved? Alternatively I could just create separate dashboards, but there is a lot of repetition in that, so I suspect there is a way to do it. Thanks in advance for any ideas.
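One approach worth trying (a sketch; I have not verified this combination on every Splunk version) combines foreach with eval's curly-brace dynamic field naming, so each fldN lands in a field named after the row's app value:

```
... | foreach fld*
    [ eval {app}<<FIELD>> = '<<FIELD>>' ]
| fields - fld1 fld2 fld3
```

Here `<<FIELD>>` expands to each matching field name and `{app}` is replaced at eval time by the value of the app field, producing e.g. appAfld1 for appA rows and appBfld1 for appB rows; the original fldN columns are then dropped so the table shows only the per-app names.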