All Posts



Let's say my proxy is Zscaler. I was able to fetch logs for the rule "Uncategorised/Unknown URL" with vendor_signature "Request method cautioned". These events capture the user who received a warning when trying to access a suspicious/malicious page but still took the risk and accessed the URL. So the idea is to build a use case that captures these warning pages followed by a successful connection to the page.
It might be that those particular kinds of sources are not covered by any ready-made add-ons. Splunk-supported add-ons usually have their documentation on https://docs.splunk.com/ Third-party add-ons - well, here you're on your own and at the mercy of the add-on creator.
And what data do you have that shows this scenario?
Just to _not_ get you into the habit of writing bad searches: the one you wrote can easily be rewritten without appendcols and a subsearch. For example, like this:

index=abc source=def
| eval is_action_buy=if(Action="Buy",1,null())
| stats count AS Total count(is_action_buy) AS Buy
| eval buy_ratio=Buy/Total
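The single-pass idea above can be sketched in plain Python (the sample events here are made up): one scan over the data produces both the total and the conditional count, instead of running two separate searches and stitching them together.

```python
# One pass over the events: total count and conditional count together,
# mirroring `stats count AS Total count(is_action_buy) AS Buy` in SPL.
events = [{"Action": "Buy"}, {"Action": "Sell"}, {"Action": "Buy"}, {"Action": None}]

total = 0
buy = 0
for e in events:
    total += 1                    # count -> Total (every event)
    if e["Action"] == "Buy":
        buy += 1                  # count(is_action_buy) -> Buy (nulls skipped)

buy_ratio = buy / total
print(total, buy, buy_ratio)      # 4 2 0.5
```

The same trick generalizes to any number of conditional counts without extra passes over the index.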
top Action will only give you the percentage over the non-null values of Action, whereas Total will include events where Action is null.
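A tiny Python illustration of that difference, with hypothetical numbers: `top`-style percent divides by the events where the field exists, while the Total-based ratio divides by all events, nulls included.

```python
# 100 events: 60 Action="Buy", 20 Action="Sell", 20 with no Action field at all.
actions = ["Buy"] * 60 + ["Sell"] * 20 + [None] * 20

non_null = [a for a in actions if a is not None]
pct_top_style = 100 * non_null.count("Buy") / len(non_null)   # like `top Action`
pct_over_total = 100 * actions.count("Buy") / len(actions)    # like Buy/Total

print(pct_top_style)   # 75.0
print(pct_over_total)  # 60.0
```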
Hi @SplunkSkunk88, Understanding how hardware, software, networks, etc. work is important, but without that background, adding a process-focused cybersecurity certification could land you a non-technical support role adjacent to engineers, threat hunters, penetration testers, analysts, etc. If you have a business background in a specific market or industry, moving laterally into that industry's cybersecurity segment could increase your chances. Don't give up! If a particular technical area interests you, e.g. cloud administration, add that to your Splunk work.
Hey, can someone please help me build a query for users accessing a webpage despite a warning page from the proxy? @splunk
Hi @gcusello, If you're receiving Elastic Common Schema (ECS) events in JSON format, i.e. Logstash tcp output plugin to Splunk raw tcp input , then a combination of search-time automatic field extractions, field aliases, etc. may be preferred if your site distinguishes between the Splunk administrator and Splunk knowledge manager functions; otherwise, I would transform the events using TRANSFORMS or RULESET. For example, given a source ECS event in _raw:   { "@timestamp": "2023-11-26T14:40:57.209Z", "@metadata": { "beat": "winlogbeat", "type": "_doc", "version": "8.11.1" }, "event": { "action": "Sensitive Privilege Use", "created": "2023-11-26T16:52:21.698Z", "code": "4673", "kind": "event", "provider": "Microsoft-Windows-Security-Auditing", "outcome": "failure" }, "log": { "level": "information" }, "message": "A privileged service was called.\n\nSubject:\n\tSecurity ID:\t\tS-1-5-21-**********-**********-**********-1234\n\tAccount Name:\t\tuser\n\tAccount Domain:\t\tCONTOSO\n\tLogon ID:\t\t0x1960F1B\n\nService:\n\tServer:\tSecurity\n\tService Name:\t-\n\nProcess:\n\tProcess ID:\t0x37d0\n\tProcess Name:\tC:\\Program Files (x86)\\Microsoft\\Edge\\Application\\msedge.exe\n\nService Request Information:\n\tPrivileges:\t\tSeProfileSingleProcessPrivilege", "host": { "os": { "kernel": "10.0.22621.2715 (WinBuild.160101.0800)", "build": "22621.2715", "type": "windows", "platform": "windows", "version": "10.0", "family": "windows", "name": "Windows 11 Pro" }, "id": "6d403ce9-3f79-4551-b651-4d3eb6c53bc9", "ip": [ "192.0.2.1" ], "mac": [ "ff-ff-ff-ff-ff-ff", ], "name": "my-pc", "hostname": "my-pc", "architecture": "x86_64" }, "ecs": { "version": "8.0.0" }, "agent": { "version": "8.11.1", "ephemeral_id": "080792d1-f86c-4a5d-9c46-265212f944f7", "id": "287205c6-b0d0-46f5-875a-1bcc6e013cf2", "name": "my-pc", "type": "winlogbeat" }, "winlog": { "provider_guid": "{54849625-5478-4994-a5ba-3e3b0328c30d}", "channel": "Security", "opcode": "Info", "process": { "pid": 4, "thread": { 
"id": 940 } }, "provider_name": "Microsoft-Windows-Security-Auditing", "event_data": { "Service": "-", "ProcessId": "0x37d0", "SubjectUserSid": "S-1-5-21-**********-**********-**********-1234", "SubjectDomainName": "MY-PC", "ObjectServer": "Security", "PrivilegeList": "SeProfileSingleProcessPrivilege", "ProcessName": "C:\\Program Files (x86)\\Microsoft\\Edge\\Application\\msedge.exe", "SubjectUserName": "user", "SubjectLogonId": "0x1960f1b" }, "keywords": [ "Audit Failure" ], "computer_name": "my-pc", "record_id": 383388, "api": "wineventlog", "event_id": "4673", "task": "Sensitive Privilege Use" } }   we can use INGEST_EVAL to reformat _raw as WinEventLog: # transforms.conf [ecs-to-wineventlog] INGEST_EVAL = _raw:=strftime(strptime(json_extract(_raw, "event.created"), "%Y-%m-%dT%H:%M:%S.%N%Z"), "%m/%d/%Y %I:%M:%S %p").urldecode("%0a")."LogName=".json_extract(_raw, "winlog.channel").urldecode("%0a")."EventCode=".json_extract(_raw, "winlog.event_id").urldecode("%0a")."ComputerName=".json_extract(_raw, "winlog.computer_name").urldecode("%0a")."RecordNumber=".json_extract(_raw, "winlog.record_id").urldecode("%0a")."Keywords=".json_extract(_raw, "winlog.keywords{}").urldecode("%0a")."TaskCategory=".json_extract(_raw, "winlog.task").urldecode("%0a")."OpCode=".json_extract(_raw, "winlog.opcode").urldecode("%0a")."Message=".json_extract(_raw, "message") We can use the same INGEST_EVAL or a separate one (before _raw is transformed) to extract _time from event.created, set sourcetype, etc. Note that my sample event does not have direct translations of EventType, SourceName, and Type. The output would be similar to a WinEventLog input with the related suppress_* settings set to true. ECS does have event.outcome and log.level fields; however, they've been normalized for Elastic. The ECS event.provider field is the internal Windows event log provider/source name, e.g. Microsoft-Windows-Security-Auditing, not the rendered name, e.g. Microsoft Windows security auditing. 
Translating to XmlWinEventLog is similar, but without access to the mvmap() function in INGEST_EVAL, construction of the <EventData><Data Name="Foo">Bar</Data></EventData> array wouldn't be dynamic. A search-time translation of EventData might look like this:   | eval _raw="<EventData>".mvjoin(mvmap(split(replace(json_extract(_raw, "winlog.event_data"), "\",\"", "\"}".urldecode("%08")."{\""), urldecode("%08")), "<Data Name=\"".mvjoin(json_array_to_mv(json_keys(_raw)), "")."\">".replace(replace(replace(replace(replace(json_extract(_raw, mvjoin(json_array_to_mv(json_keys(_raw)), "")), "&", "&amp;"), "\"", "&quot;"), "'", "&apos;"), "<", "&lt;"), ">", "&gt;")."</Data>"), "")."</EventData>"   --but it isn't very maintainable. It also doesn't help with your problem; it's just fun.  (Edit: The community editor may have stripped some of the characters from my examples. Apologies if they don't work as shown! This edit will also remove syntax highlighting--old community bug!) Some XML elements are not present in every event, so we would need to wrap segments in coalesce() to make them optional during extraction, e.g. coalesce(json_extract(_raw, "winlog.some_missing_field"), ""), or just inject an empty element, e.g. <Security />. The Windows Event schema is documented at <https://learn.microsoft.com/en-us/windows/win32/wes/eventschema-schema>.
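For the XmlWinEventLog direction, the dynamic <Data> construction that INGEST_EVAL can't express is trivial outside of SPL. A hedged Python sketch, using the standard library's escaping in place of the manual replace() chain (the event_data keys are taken from the sample event):

```python
from xml.sax.saxutils import escape

# Subset of winlog.event_data from the sample ECS event.
event_data = {
    "PrivilegeList": "SeProfileSingleProcessPrivilege",
    "ProcessName": "C:\\Program Files (x86)\\Microsoft\\Edge\\Application\\msedge.exe",
    "SubjectUserName": "user",
}

# Build <EventData><Data Name="Key">Value</Data>...</EventData>, escaping
# &, <, > plus quotes in values, as the nested replace() calls do in SPL.
parts = [
    '<Data Name="{}">{}</Data>'.format(k, escape(v, {'"': "&quot;", "'": "&apos;"}))
    for k, v in event_data.items()
]
xml = "<EventData>" + "".join(parts) + "</EventData>"
print(xml)
```

Because Python iterates the dict directly, every key/value pair becomes a <Data> element without knowing the field names in advance, which is exactly what the static SPL version can't do.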
Hi, why are the two queries below giving me different percentage values? I checked, and the total count and the count for Action=Sell are the same. Am I missing something here?

index=abc source=def
| top Action

This gives me 49.7% for Action=Buy.

index=abc source=def
| stats count as Total
| appendcols [ search index=abc source=def | search Action=Buy | stats count as Buy]
| eval Percent_Buy=round((Buy/Total)*100,2)

This gives me 27.7% for Action=Buy.
Thank you so much!
No, you do not need any Splunk software for Splunk training or exams.  Training courses will provide any necessary software and exams require no Splunk software.  Experience with Splunk software is helpful, but not required.
That vulnerability has a CVSS vector of CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:C/C:H/I:H/A:H, which means it requires network access, low-privilege credentials, and user interaction to exploit.  If your Splunk is not accessible by outsiders then your exposure is lower.  See https://advisory.splunk.com/advisories/SVD-2023-1104 for everything public about the vulnerability.
Is the Splunk Software Required for Splunk Training & Certification Exams?
Does anyone know the likelihood of landing an entry-level cybersecurity job with only a Splunk cybersecurity certification and no other cybersecurity or computer certifications (nor background nor education in IT)?
I have set up the Java environment correctly. Both $PATH and $JAVA_HOME are accessible. I am able to connect to the JMX server with the same URL, username, and password from JConsole, but I am unable to connect from the Splunk Add-on for JMX. Below is what I found in $SPLUNK_HOME/var/log/splunk/jmx.log:

2023-11-26 16:03:54,246 WARNING pid=13767 tid=MainThread file=jmx.py:<module>:86 | Failed to open the invoke the input. Reason: Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_jmx/bin/jmx.py", line 82, in <module>
    java_const.JAVA_MAIN_ARGS
  File "/opt/splunk/lib/python3.7/subprocess.py", line 800, in __init__
    restore_signals, start_new_session)
  File "/opt/splunk/lib/python3.7/subprocess.py", line 1567, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'java': 'java'
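That FileNotFoundError means the process environment splunkd launches the add-on with has no 'java' on its PATH, regardless of what your interactive shell sees. The failure mode is easy to reproduce in plain Python; the binary name below is deliberately fake.

```python
import subprocess

# The add-on launches its JVM via subprocess. If the executable is not on the
# PATH the launching process inherited, Popen raises FileNotFoundError with
# errno 2 -- exactly the error recorded in jmx.log.
err = None
try:
    subprocess.Popen(["definitely-not-java-xyz", "-version"])
except FileNotFoundError as e:
    err = e
    print(type(e).__name__, e.errno)  # FileNotFoundError 2
```

So the check worth making is what PATH looks like for the user and environment that splunkd itself runs under, not for your login shell.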
Hi @MayurMangoli, in this case you have to migrate them one by one. You could start by finding all the knowledge objects you have in local folders in the ES apps. Then you should list the enabled Correlation Searches. Alternatively, you could copy the old SH to the new DS and then deploy ES to the new SH. Anyway, I don't like to manage an SH with the DS. Ciao. Giuseppe
Any time multiple words are used for the same meaning, whether in different languages or the same language, they should be normalized before use. I like to use the case function for that:

| eval status=case(status="Approved" OR status="Approuvé", "Approved", 1==1, "Denied")

As for separator words in different languages, just incorporate them into your regex:

| rex "(from|du) (?<from_time>.+?) (until|au) (?<until_time>.+)"
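The same alternation works anywhere regexes do; here's a quick Python check of the bilingual pattern (the sample strings are made up):

```python
import re

# English and French separator words folded into one pattern, as in the rex above.
pattern = re.compile(r"(?:from|du) (?P<from_time>.+?) (?:until|au) (?P<until_time>.+)")

for text in ["from 09:00 until 17:00", "du 09:00 au 17:00"]:
    m = pattern.search(text)
    print(m.group("from_time"), m.group("until_time"))  # 09:00 17:00 (both lines)
```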
1. Finding something that is not there is not Splunk's strong suit. See this blog entry for a good write-up on it: https://www.duanewaddle.com/proving-a-negative/
2. Use the distinct_count function instead of count to get the number of unique patches.
Hello @gcusello, thanks for the update, but in my case I have an issue with the old SH: my old SH was also being used as the deployment server, and for the new setup I have made each a different server, so I was having trouble migrating Enterprise Security from the old SH to the new one. I have copied SplunkEnterpriseSecuritySuite from the old SH's /apps directory and pasted it to the new SH - will that also migrate the use cases?
Hello, I have this query:

index="bigfixreport"
| timechart count(Category__Names_of_Patches) as totalNumberOfPatches by Computer_Name
| eval exposure_level = case(
    totalNumberOfPatches >= 2 AND totalNumberOfPatches <= 5, "Low Exposure",
    totalNumberOfPatches >= 6 AND totalNumberOfPatches <= 9, "Medium Exposure",
    totalNumberOfPatches >= 10, "High Exposure",
    totalNumberOfPatches == 1, "Compliant",
    1=1, "<not reported>")
| eval category=exposure_level
| xyseries category exposure_level totalNumberOfPatches

The purpose of this query is to count the number of patches for each computer name and visualize it in a pie chart - one slice for each category, with each category in a different color ("Low Exposure" - blue, "Medium Exposure" - yellow, "High Exposure" - red, "Compliant" - green, "<not reported>" - gray). I have a few problems:
1. Since I count numbers, "<not reported>" is never counted and does not display in the list.
2. I get a new file every day, and the number of patches for some computer may stay the same for several days (for example, 3 patches for a specific computer for 5 days). If I just count the number of patches, it counts 3+3+3+3+3, which is wrong since they are the same 3 patches.
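For problem 2, the usual fix is to count distinct (computer, patch) pairs rather than raw events - in SPL that's dc() as suggested above. The idea in a small Python sketch with made-up data:

```python
# Five daily reports of the same 3 patches for one computer: a raw count
# gives 15, but deduplicating (computer, patch) pairs gives the true 3.
rows = [("pc-1", f"patch-{p}") for p in (1, 2, 3) for _day in range(5)]

raw_count = len(rows)                            # 15: recounts each patch daily
unique_patches = len({(c, p) for c, p in rows})  # 3: like dc(patch) per computer
print(raw_count, unique_patches)
```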