All Posts

Any ideas? I am still not able to make it work.
Hi @pavithra, you could try the ceiling function, ceil. Example:

| eval day_of_month=strftime(_time, "%d")
| eval day_of_week=strftime(_time, "%A")
| eval week_of_month=ceil(day_of_month/7)
| where day_of_week="Tuesday" AND week_of_month=2

Best regards,
Hi @Kaushaas, all the hints I can think of have been checked; open a case with Splunk Support, as there isn't any other solution. Ciao. Giuseppe
Hi @gcusello, I have the Power User role but I still don't see the Edit permissions option. Anything you can suggest?
Hello @mahesh27, You can write an eval condition to rename the NULL value to whatever string you wish:

| eval service_code=if(service_code="NULL","Non-Servicecode",service_code)

Just append the above eval condition to your SPL query and it should work. Thanks, Tejas.
Hi @joock3r, it depends on the data source. If you have a lookup containing two columns (country and campus), you can filter the second dropdown using the choice in the first, something like this:

| inputlookup your_lookup.csv where country="$token1$"
| fields campus

If instead you have only one list (USA 1, USA 2, Romania 1, Romania 2, Turkey 1, Turkey 2), you should extract the country from the list using a regex, e.g. something like this (having only one column called campus, always containing the country and a number):

First dropdown:
| inputlookup your_lookup.csv
| rex field=campus "^(?<country>[^0-9]+)\d+"
| fields country

Second dropdown:
| inputlookup your_lookup.csv
| rex field=campus "^(?<country>[^0-9]+)\d+"
| search country="$token1$"
| fields campus

Ciao. Giuseppe
Hi @paragg, is your main dashboard only a menu, or does it also contain panels with data that drill down into the other dashboards? In the first case, you have to use HTML tags to create links to the other dashboards, something like this:

<dashboard version="1.1">
  <label>Home Page</label>
  <row>
    <panel>
      <html>
        <h1>Title Panel 1</h1>
        <table border="0" cellpadding="10" align="center">
          <tr>
            <td align="center">
              <a href="/app/my_app/dashboard1">
                <img style="width:80px;border:0;" src="/static/app/my_app/image1.png"/>
              </a>
            </td>
            <td align="center">
              <a href="/app/my_app/dashboard2">
                <img style="width:80px;border:0;" src="/static/app/my_app/image2.png"/>
              </a>
            </td>
          </tr>
          <tr>
            <td align="center">
              <a href="/app/my_app/dashboard1">Title Dashboard 1</a>
            </td>
            <td align="center">
              <a href="/app/my_app/dashboard2">Title Dashboard 2</a>
            </td>
          </tr>
        </table>
      </html>
    </panel>
  </row>
</dashboard>

If instead you have to create a drilldown, please follow the instructions at https://docs.splunk.com/Documentation/Splunk/latest/Viz/DrilldownIntro Ciao. Giuseppe
Hi @payl_chdhry, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @pavithra, at first it isn't clear which timeframe you want to display, but this is inside your search, so if it's the previous month, you could run something like this:

<your_search> earliest=-mon@mon latest=@mon
| stats count BY key

and schedule your search using this cron (in cron notation Tuesday is day 2, and the second Tuesday of a month always falls between the 8th and the 14th):

0 0 8-14 * 2

Ciao. Giuseppe
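For reference, a minimal savedsearches.conf sketch of how that schedule might look (the stanza name is hypothetical, and the search is a placeholder to adapt to your own):

# savedsearches.conf
[monthly_second_tuesday_report]
search = index=your_index earliest=-mon@mon latest=@mon | stats count BY key
cron_schedule = 0 0 8-14 * 2
enableSched = 1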
Thank you for your reply, but this will double the consumption of CPU and memory resources.
Just pointing out here that the statement | inputlookup test.csv OR inputlookup test2.csv is not valid Splunk - you cannot do two inputlookup commands like that.
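For reference, one valid way to combine two lookups (a sketch assuming test.csv and test2.csv have compatible columns) is to append one to the other:

| inputlookup test.csv
| append [| inputlookup test2.csv]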
It's pretty straightforward to do that:

| makeresults format=csv data="VIN,MAKE,MODEL
1234ABCD,FORD,GT
ABCD1234,DODGE,VIPER
1A2B3C4D,CHEVROLET,CORVETTE
A1B2C3D4,AUDI,"
| eval sourcetype="autos"
| append [
  | makeresults format=csv data="SN,MANUFACTURER,PRODUCT
1234ABCD,FORD,GT
ABCD1234,DODGE,CARAVAN
1A2B3C4D,CHEVY,CORVETTE
A1B2C3D4, ,A8"
  | eval sourcetype="cars" ]
``` Above is sample data setup, but imagine your data has come from index=your_index sourcetype=autos OR sourcetype=cars ```
``` Now use VIN as the common field - there are actually many ways to do the same thing, but what you are doing here is to make the dc_XXX fields the ones to be counted for uniqueness. ```
| eval VIN=coalesce(VIN, SN), dc_makes=coalesce(MAKE, MANUFACTURER), dc_models=coalesce(MODEL, PRODUCT)
``` Here the stats values(*) collects all the original data - you may want to add a | fields statement before this to limit to the fields you want. It also counts the unique values of the dc_* fields, which are the make and model from whichever sourcetype. ```
| stats values(*) as * dc(dc_*) as dc_* by VIN
``` And now this will find your mismatched items ```
| where dc_makes>1 OR dc_models>1
| fields - sourcetype dc_*

Hope this helps
In case you haven't noticed, this is a really old thread. Your question might not get the visibility you want in this thread. Try starting a new thread describing your problem in the Getting Data In section of this forum.
1. This is a very old thread. Starting a new one would give you more visibility. 2. Well, not every type of input supports this parameter, so I'm not sure specifying it here is syntactically correct. Try it and see (verify with btool check).
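For example, assuming a standard installation, you can run btool from the command line:

$SPLUNK_HOME/bin/splunk btool check
$SPLUNK_HOME/bin/splunk btool inputs list --debug

The first command flags typos and unsupported settings across your .conf files; the second shows the effective inputs.conf settings and which file each value comes from.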
Adding to that answer - if you just search for "1.2.3.4", your search term might not encompass a whole major-breaker-delimited search term but sit somewhere in the middle of a "word" delimited by minor breakers - like "version.1.2.3.4". So Splunk searches for 1, 2, 3 and 4 separately and then checks whether the events matching all of those partial terms also match the literal search string. If you explicitly tell it to find TERM(1.2.3.4), it will only return events containing the indexed term 1.2.3.4.
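To see the difference yourself, compare the two forms (the index name web is a placeholder):

index=web 1.2.3.4
index=web TERM(1.2.3.4)

The second form matches only events where 1.2.3.4 was indexed as a single term; the Job Inspector's search.log shows the terms each search actually scans for.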
For Splunk-to-Splunk connectivity, Splunk uses the aptly named S2S (Splunk-to-Splunk) protocol. It can be used either "raw" or embedded in HTTP.
Hi @payl_chdhry, Splunk uses the proprietary Splunk-To-Splunk (S2S) protocol over TCP between tcpout outputs and splunktcp inputs, optionally encapsulated in TLS. Splunk also natively supports HTTP output and input via HTTP Event Collector, syslog output, raw TCP output, and raw TCP and UDP input.
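As a rough illustration of the S2S path, here is a minimal sketch, assuming an indexer reachable at idx1.example.com (hostname, port, and group name are placeholders, and TLS certificate settings are omitted):

# outputs.conf on the forwarder
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = idx1.example.com:9997
useSSL = true

# inputs.conf on the indexer
[splunktcp-ssl:9997]
disabled = 0

The splunktcp input speaks S2S; an [http://name] HEC stanza or a [tcp://port] raw input would accept the HTTP and raw TCP alternatives mentioned above.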
In some of my earlier posts, if you see them, I incorrectly stated that unarchive_cmd would give us checkpoint tracking, etc. It does not. When the file is modified, it is re-read from the beginning. This is still useful for dropping complete files into a monitored directory. Also: don't forget to either keep your SEDCMD setting intact or add code to the Python script to mask the password. We'll assume all of this is being done for research and offline analysis.
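For reference, a masking rule of that kind might look like this in props.conf (the setting name suffix and regex are illustrative, written for the JSON password fields in the example below):

[alert_data]
SEDCMD-mask_passwords = s/"password":\s*"[^"]*"/"password": "********"/g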
My first instinct is to use the mvmap eval function to iterate over the alert_data.csv.content array and join the results into a new JSON array:

| makeresults
| eval _raw="{\"alert_data\": {\"domain\": \"abc.com\", \"csv\": {\"id\": 12345, \"name\": \"credentials.csv\", \"mimetype\": \"text/csv\", \"is_safe\": true, \"content\": [{\"username\": \"test1@abc.com\", \"password\":\"1qaz@WSX#EDC\"}, {\"username\": \"test2@abc.com\", \"password\":\"NotComplex\"}]}}}"
| eval _raw=json_set(json(_raw), "alert_data.csv.content",
    json("[".mvjoin(mvmap(json_array_to_mv(json_extract(json(_raw), "alert_data.csv.content{}")),
      json_set(_raw, "is_password_meet_complexity",
        if(len(json_extract(_raw, "password")) >= 8
           AND (if(match(json_extract(_raw, "password"), "[[:digit:]]"), 1, 0)
              + if(match(json_extract(_raw, "password"), "[[:upper:]]"), 1, 0)
              + if(match(json_extract(_raw, "password"), "[[:lower:]]"), 1, 0)
              + if(match(json_extract(_raw, "password"), "[[:punct:]]"), 1, 0)) >= 3,
           "Yes", "No"))), ",")."]"))

However, mvmap is not supported by INGEST_EVAL: "The following search-time eval functions are not currently supported at index-time with INGEST_EVAL: mvfilter, mvmap, searchmatch, now, and commands." See https://docs.splunk.com/Documentation/Splunk/latest/Data/IngestEval.

To work around the missing functionality, we must analyze the input stream using an external process. We have several options available, but my preference lately for file (monitor) inputs is the props.conf unarchive_cmd setting. unarchive_cmd streams data to an external command over stdin and sends the command's stdout stream to the Splunk ingest pipeline. If we assume your file input is newline-delimited JSON, unarchive_cmd allows us to read each object from stdin, process each content array item individually, and write the resulting object to stdout.
Given alert_data.ndjson:

{"alert_data": {"domain": "abc.com", "csv": {"id": 12345, "name": "credentials1.csv", "mimetype": "text/csv", "is_safe": true, "content": [{"username": "test1@abc.com", "password":"1qaz@WSX#EDC"}, {"username": "test2@abc.com", "password":"NotComplex"}]}}}
{"alert_data": {"domain": "abc.com", "csv": {"id": 67890, "name": "credentials2.csv", "mimetype": "text/csv", "is_safe": true, "content": [{"username": "test3@abc.com", "password":"passw0rd"}, {"username": "test4@abc.com", "password":"j#4kS.0e"}]}}}

let's introduce an alert_data source type and construct inputs.conf and props.conf:

# inputs.conf
[monitor:///tmp/alert_data.ndjson]
sourcetype = alert_data

# props.conf
[source::...alert_data.ndjson]
unarchive_cmd = python $SPLUNK_HOME/bin/scripts/preprocess_alert_data.py
unarchive_cmd_start_mode = direct
sourcetype = preprocess_alert_data
NO_BINARY_CHECK = true

[preprocess_alert_data]
invalid_cause = archive
is_valid = False
LEARN_MODEL = false

[alert_data]
DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)

Now let's write $SPLUNK_HOME/bin/scripts/preprocess_alert_data.py to read, process, and write JSON objects:

import json
import re
import sys

for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    try:
        json_object = json.loads(line)
        for item in json_object["alert_data"]["csv"]["content"]:
            meets_length_requirement = len(item["password"]) >= 8
            digit_score = 1 if re.search(r"\d", item["password"]) else 0
            upper_score = 1 if re.search(r"[A-Z]", item["password"]) else 0
            lower_score = 1 if re.search(r"[a-z]", item["password"]) else 0
            punct_score = 1 if re.search(r"[^a-zA-Z0-9\s]", item["password"]) else 0
            meets_complexity_requirement = (digit_score + upper_score + lower_score + punct_score) >= 3
            if meets_length_requirement and meets_complexity_requirement:
                item["is_password_meet_complexity"] = "Yes"
            else:
                item["is_password_meet_complexity"] = "No"
        print(json.dumps(json_object))
    except Exception as err:
        print(err, file=sys.stderr)
        print(line)

On a full instance of Splunk Enterprise, i.e. a heavy forwarder, Splunk will use its local copy of Python. On a universal forwarder, we'll need to install Python 3.x and make sure the executable is in the path. At scale, this solution is better implemented as a modular input, but that's a separate topic for a larger discussion.
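Before wiring the script into Splunk, you can sanity-check it on its own from a shell (paths assume the examples above):

cat /tmp/alert_data.ndjson | python $SPLUNK_HOME/bin/scripts/preprocess_alert_data.py

Each input object should come back on stdout with is_password_meet_complexity added to every content item, which is exactly the stream Splunk will ingest.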