All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello! I have a really simple Unix-based shell script that returns info about the httpd (Apache) service. The script is encapsulated in an input, so the printf statement becomes the event. Each event is one line only. Here is an indexed event coming from the UF (with highlights that I will explain below). For some reason the sourcetype is not working, since _time is not what I specify; rather, it is half from the field I want (timestamp in green) and half from some text in the payload that I do not want (date in red). The sourcetype is currently this (it has gone through many evolutions):

[linux:httpdinfo]
SHOULD_LINEMERGE = false
KV_MODE = auto
MAX_TIMESTAMP_LOOKAHEAD = 30
TIME_FORMAT = %Y-%m-%d %H:%M:%S %z

No matter what I try, I cannot seem to get it to work. Could somebody give me a push in the right direction? Thanks! Andrew
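A minimal props.conf sketch of the usual fix for this symptom, assuming the timestamp you want is preceded by some fixed text in the event (the TIME_PREFIX regex below is a placeholder you would adapt to whatever literally precedes your timestamp):

[linux:httpdinfo]
SHOULD_LINEMERGE = false
KV_MODE = auto
# Anchor timestamp recognition so Splunk stops latching onto the wrong date in the payload
TIME_PREFIX = ^timestamp=
MAX_TIMESTAMP_LOOKAHEAD = 30
TIME_FORMAT = %Y-%m-%d %H:%M:%S %z

Without a TIME_PREFIX, Splunk scans from the start of the event and can build its timestamp from the first date-like text it finds, which matches the half-and-half behavior described above.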
We recently onboarded these logs, but most of the fields are not extracted even though the values are delimited with =. I am trying to extract batch_id, tran_id, pricing hashcode, and rules hashcode. I tried to extract from the GUI, but I am seeing a lot of mismatches. Can anyone help me with this? Here are sample logs:

{"logGroup": "ldcs-devl-eb-06-webapp-Application", "logStream": "ip-10-108-18-243 (i-004009051755596bb) - ld-pricing.log", "aws_acctid": "189693026861", "aws_region": "us-east-1", "splunkdata": {"shard_id": "000000000020", "splkhf": "spitsi-acpt-log-heavy-4", "rvt": 1642014308933}, "lifecycle": "devl-shared", "aws_appshortname": "ldcs", "appcode": "FVV", "cwmessage": "2022-01-12 14:05:02.322|[DefaultThreadPool-18] LD-PRICING-INFO c.f.l.pricing.mapper.DealSetsMapper STARTOFFIELDS|component=LD-PRICING|user_id=c9273wne|seller_id=165700007|session_id=D86C9BAF3F308C7838E4A52BC0DA0938.LDNG-UI-cl02|tran_id=9a6e8ba3-2c01-4b18-bbfb-88a854bbdb85|batch_id=9a6e8ba3-2c01-4b18-bbfb-88a854bbdb85|dealset_id=116784|execution_type=WholeLoan|loan_count=1|time=|messageId=ID:SOADevl-ems08.752D61D05D2DBE2E02:414|ENDOFFIELDS - Pricing Info ~ Pricing Hashcode: 1761264532 - Rules Hashcode: -1500207091 - uniqueClientDealIdentifier: a37801e4-dbe6-4c3a-bc26-17d1a78a0b28 - sellerLoanIdentifier: BTP22_0111_B10 - poolIdentifier: null - investorCommitmentIdentifier: 116784 - sellerId: 165700007 ", "cwtimestamp": 1641996302000}

{"logGroup": "ldcs-devl-eb-06-webapp-Application", "logStream": "ip-10-108-18-243 (i-004009051755596bb) - ld-pricing.log", "aws_acctid": "189693026861", "aws_region": "us-east-1", "splunkdata": {"shard_id": "000000000020", "splkhf": "spitsi-acpt-log-heavy-4", "rvt": 1642014334358}, "lifecycle": "devl-shared", "aws_appshortname": "ldcs", "appcode": "FVV", "cwmessage": "2022-01-12 14:05:27.035|[DefaultThreadPool-20] LD-PRICING-INFO c.f.l.pricing.mapper.DealSetsMapper STARTOFFIELDS|component=LD-PRICING|user_id=c9273wne|seller_id=165700007|session_id=D86C9BAF3F308C7838E4A52BC0DA0938.LDNG-UI-cl02|tran_id=751b1112-0511-4dbd-b94c-a6409c23b20d|batch_id=751b1112-0511-4dbd-b94c-a6409c23b20d|dealset_id=116784|execution_type=WholeLoan|loan_count=1|time=|messageId=ID:SOADevl-ems08.752D61D05D2DBE2E0A:457|ENDOFFIELDS - Pricing Info ~ Pricing Hashcode: 1761264532 - Rules Hashcode: -1500207091 - uniqueClientDealIdentifier: a37801e4-dbe6-4c3a-bc26-17d1a78a0b28 - sellerLoanIdentifier: BTP22_0111_B10 - poolIdentifier: null - investorCommitmentIdentifier: 116784 - sellerId: 165700007 ", "cwtimestamp": 1641996327000}
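A sketch of one way to pull those four values out with rex, assuming cwmessage is already extracted from the JSON (run spath first if it is not); the output field names are my own choices:

... | rex field=cwmessage "tran_id=(?<tran_id>[^|]+)"
    | rex field=cwmessage "batch_id=(?<batch_id>[^|]+)"
    | rex field=cwmessage "Pricing Hashcode:\s+(?<pricing_hashcode>-?\d+)"
    | rex field=cwmessage "Rules Hashcode:\s+(?<rules_hashcode>-?\d+)"
    | table tran_id batch_id pricing_hashcode rules_hashcode

The [^|]+ pattern stops each capture at the next pipe delimiter, and -?\d+ tolerates the negative hashcodes seen in the samples.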
I have a few Windows servers on which I need to enable file monitoring so they send logs to the Splunk Enterprise server. I could use your hands-on experience, please, including any SPL and dos & don'ts. I found this link, but it is too generic: https://docs.splunk.com/Documentation/Splunk/8.2.4/Data/MonitorfilesystemchangesonWindows Thanks a million in advance. Have a great week ahead.
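If by "file monitoring" you mean ingesting log files from disk (as opposed to auditing file-system changes, which the linked doc covers via Windows Security events), a minimal inputs.conf sketch for the forwarder looks like this; the path, index, and sourcetype are placeholders for your environment:

[monitor://C:\MyApp\Logs]
disabled = 0
index = my_windows_index
sourcetype = my_app_logs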
I am trying to configure a load balancer for a Splunk SHC, but I am unable to find any resources on how to do it. This is the first time I have ever worked with load balancers, so I need step-by-step instructions.
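Splunk itself does not care which load balancer you use, so this is a rough illustration of the idea only: front the search heads' web port (8000 by default) and use a sticky balancing method so a user's session stays on one member. A minimal HAProxy sketch with hypothetical member IPs:

frontend splunk_web
    bind *:8000
    mode http
    default_backend splunk_shc

backend splunk_shc
    mode http
    balance source        # source-IP stickiness keeps a session on one search head
    server sh1 10.0.0.11:8000 check
    server sh2 10.0.0.12:8000 check
    server sh3 10.0.0.13:8000 check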
Hello, I am using the MLTK and I get this error:

Error in 'fit' command: (ImportError) DLL load failed while importing _arpack: The specified procedure could not be found.

It would be great if the team could help me solve this.
Hi All, I am writing a playbook that sends an automated email when a case is opened in Phantom. I know that if you are doing a manual promotion (via the GUI), you would need a REST query executed from a playbook, hitting the container endpoint and looking for "container_type": "case". Then you would just have a format block to populate the REST results, and a connected send-email action via SMTP. What are the steps to get a REST query executed? Brandy
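A sketch of the REST call itself, with a placeholder hostname and credentials — Phantom's REST API accepts _filter_ parameters on the container endpoint, which matches the "container_type": "case" check described above:

curl -k -u apiuser:password \
  'https://phantom.example.com/rest/container?_filter_container_type="case"'

From a playbook, the same query can be issued with whatever HTTP mechanism your environment allows, and the JSON response fed into the format block.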
I'm trying to get a new sourcetype (NetApp user-level audit logs, exported as XML) to work, and I think my fields.conf tokenizer is breaking things. But I'm not really sure how, or why, or what to do about it. The raw data is XML, but I'm not using KV_MODE=xml because that doesn't properly handle all the attributes. So, I've got a bunch of custom regular expressions, the true backbone of all enterprise software. Here's a single sample event (but you can probably disregard most of it; it's just here for completeness):

<Event><System><Provider Name="NetApp-Security-Auditing" Guid="{guid-edited}"/><EventID>4656</EventID><EventName>Open Object</EventName><Version>101.3</Version><Source>CIFS</Source><Level>0</Level><Opcode>0</Opcode><Keywords>0x8020000000000000</Keywords><Result>Audit Success</Result><TimeCreated SystemTime="2022-01-12T15:42:41.096809000Z"/><Correlation/><Channel>Security</Channel><Computer>server-name-edited</Computer><ComputerUUID>guid-edited</ComputerUUID><Security/></System><EventData><Data Name="SubjectIP" IPVersion="4">1.2.3.4</Data><Data Name="SubjectUnix" Uid="1234" Gid="1234" Local="false"></Data><Data Name="SubjectUserSid">S-1-5-21-3579272529-1234567890-2280984729-123456</Data><Data Name="SubjectUserIsLocal">false</Data><Data Name="SubjectDomainName">ACCOUNTS</Data><Data Name="SubjectUserName">davidsmith</Data><Data Name="ObjectServer">Security</Data><Data Name="ObjectType">Directory</Data><Data Name="HandleID">00000000000444;00;002a62a7;0d3d88a4</Data><Data Name="ObjectName">(Shares);/LogTestActivity/dsmith/wordpress-shared/plugins-shared</Data><Data Name="AccessList">%%4416 %%4423 </Data><Data Name="AccessMask">81</Data><Data Name="DesiredAccess">Read Data; List Directory; Read Attributes; </Data><Data Name="Attributes">Open a directory; </Data></EventData></Event>

My custom app's props.conf has a couple dozen lines like this, one for each element I want to be able to search on:

EXTRACT-DesiredAccess = <Data Name="DesiredAccess">(?<DesiredAccess>.*?)<\/Data>
EXTRACT-HandleID = <Data Name="HandleID">(?<HandleID>.*?)<\/Data>
EXTRACT-InformationRequested = <Data Name="InformationRequested">(?<InformationRequested>.*?)<\/Data>

This works as you'd expect, except for a couple of fields that are composites. This is most noticeable in the DesiredAccess element, which in our example looks like:

<Data Name="DesiredAccess">Read Data; List Directory; Read Attributes; </Data>

Thus you get a single field containing "Read Data; List Directory; Read Attributes; ", and if you only need to look for, say, "List Directory", you have to get clever with your searches. So, I added a fields.conf file with this in it:

[DesiredAccess]
TOKENIZER = \s?(.*?);

When I paste the 'raw' contents of that field, and that regex, into a tool like regex101.com, it works and returns the expected results. Similarly, it also works if I remove it from fields.conf and put it in as a makemv command:

index=nonprod_pe | makemv tokenizer="\s?(.*?);" DesiredAccess

With the TOKENIZER element in fields.conf, the DesiredAccess attribute just doesn't populate, period. So I assume it's the problem. (Since this is in an app, the app's metadata does contain explicit "export = system" lines for both [props] and [fields]. And the app is on indexers and search heads. It probably doesn't need to be in both places, but hey, I'm still learning...) So, what am I doing wrong with my fields.conf tokenizer that's caused it to fail completely to identify any elements?
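I am not sure why the TOKENIZER stanza kills the extraction, but as a workaround sketch that stays entirely in props.conf: extract into an intermediate field, then build the multivalue field with a calculated field (DesiredAccess_raw is a hypothetical name of my own):

EXTRACT-DesiredAccess = <Data Name="DesiredAccess">(?<DesiredAccess_raw>.*?)<\/Data>
EVAL-DesiredAccess = split(trim(DesiredAccess_raw, "; "), "; ")

The trim() drops the trailing "; " so split() does not produce an empty last value.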
Hello, I am trying to integrate the Automic Automation Intelligence tool with Splunk. https://www.broadcom.com/products/software/automation/automic-automation-intelligence Basically, this tool reads the Autosys database to download job run history (successes, failures, etc.). I want to integrate it with Splunk so that I can create a dashboard with the job run history, fetched in real time. Is there any way to do that? Any API calls or any other way? Please suggest.
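One common pattern, sketched with a placeholder host and token: have a small script poll the tool's API or database and push each job-run record to Splunk's HTTP Event Collector, which gets you near-real-time dashboards without a file intermediary:

curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk 11111111-2222-3333-4444-555555555555" \
  -d '{"event": {"job": "nightly_load", "status": "SUCCESS"}, "sourcetype": "autosys:jobrun"}'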
Hi All, I'm tweaking my inputs.conf file to exclude some events from the Windows Security log. I'm filtering EventCode 4688 by message. For compatibility reasons, I want to use the same inputs.conf file for all Windows machines. But Windows 11 has tweaked a couple of event logs, and one of those is 4688. For Windows 10 and below, the following blacklist is working as expected:

blacklist1 = EventCode="4688" Message="Token Elevation Type:(?!\s*%%1937)"

This filters everything except %%1937. But this won't work for Windows 11, because it changed the Token Elevation Type from the previous "%%1937" to "TokenElevationTypeFull". Therefore, if a Windows 10 inputs.conf file ends up on a Windows 11 machine, it blacklists all the 4688 logs. So, simply put, I would like to combine the two lines below into a single line, so that an event passes through if either Token Elevation Type is found. But the "|" operator doesn't seem to be working, or I'm not using the correct syntax.

blacklist1 = EventCode="4688" Message="Token Elevation Type:(?!\s*%%1937)"
blacklist1 = EventCode="4688" Message="Token Elevation Type:(?!\s*TokenElevationTypeFull)"

Can anyone help marry these two checks with an OR operator? Thank you
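A sketch of the merged check, assuming the "Token Elevation Type:" label itself is unchanged on Windows 11 — the alternation goes inside the negative lookahead, so an event is kept (not blacklisted) when either value follows the label:

blacklist1 = EventCode="4688" Message="Token Elevation Type:(?!\s*(?:%%1937|TokenElevationTypeFull))"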
I have two searches:

Search A
index=my_idx sourcetype=my_st Name=conference Message=joined
| stats count by _time Participant Conference DisplayName Location Protocol

Search B
index=my_idx sourcetype=my_st Name=conference Message=disconnected
| stats count by _time Participant Conference Duration DisplayName Location Protocol

I would like to create a table that combines the Duration field with all the fields from Search A. I would then like to include columns for the join time and the disconnect time that correlate with the value of Duration. The output would look like this:

Search C outcome:
Participant Conference Join_Time Disconnect_Time Duration DisplayName Location Protocol

Thank you, Jason H.
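A sketch of one way to get there in a single search rather than a join; the field handling follows the two searches above, but treat it as a starting point:

index=my_idx sourcetype=my_st Name=conference (Message=joined OR Message=disconnected)
| eval join_time=if(Message=="joined", _time, null())
| eval disc_time=if(Message=="disconnected", _time, null())
| stats min(join_time) as Join_Time max(disc_time) as Disconnect_Time values(Duration) as Duration by Participant Conference DisplayName Location Protocol
| fieldformat Join_Time=strftime(Join_Time, "%Y-%m-%d %H:%M:%S")
| fieldformat Disconnect_Time=strftime(Disconnect_Time, "%Y-%m-%d %H:%M:%S")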
I want to get the value 200 as the status code, along with response_time, in a table format from the raw data below:

Status | Response_Time
200 | 0.012052
200 | 0.103866

Log 1:
\"GET /actuator HTTP/2.0\" 200 0 1851 \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36 Edg/97.0.1072.55\" \"10.229.62.179:56886\" \"10.55.6.79:61026\" x_forwarded_for:\"10.229.62.179\" x_forwarded_proto:\"https\" vcap_request_id:\"36c0662d-09e7-467f-774b-391ca2ad337a\" response_time:0.012052 gorouter_time:0.000224

Log 2:
HTTP/2.0\" 200 0 180 \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36\" \"10.229.62.179:54696\" \"10.55.6.79:61026\" x_forwarded_for:\"10.229.62.179\" x_forwarded_proto:\"https\" vcap_request_id:\"8b37b42c-f3b2-4103-5ac2-fb12009cad3f\" response_time:0.103866 gorouter_time:0.000265
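A sketch of the two extractions — the exact escaping depends on how the backslashes and quotes are actually indexed, so treat these regexes as starting points to verify against your events:

... | rex "HTTP\/[0-9.]+\S*\s+(?<Status>\d{3})\s"
    | rex "response_time:\s*(?<Response_Time>\d+\.\d+)"
    | table Status Response_Time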
Hi, I am trying to create a new column in my table after extracting information from JSON data. The new column should have the value "True" or "False" depending on whether the "toDomain" column value is present in a lookup table.

Query:

index="pps_index" sourcetype="pps_messagelog" "filter.routeDirection"=outbound
| rex field=envelope.rcpts{} ".*@(?<toDomain>.*)"
| rex field=envelope.from ".*@(?<fromDomain>.*)"
| rename envelope.from as Sender envelope.rcpts{} as Recipient msg.header.subject as Subject msgParts{}.detectedName as Attachment
| table Sender Recipient Subject Attachment toDomain

The lookup file "publicDomain.csv" contains, for example:

publicDomain
123.com
123box.net
123india.com
123mail.cl
123qwe.co.uk
126.com
15meg4free.com
163.com
163.net
169.cc
188.net

Current output (Sender | Recipient | Subject | Attachment | toDomain):

Ruotong_Yin@contractor.amat.com | ngarza@littelfuse.com | RE: AMAT PO 4513405497 11.26.2021 Littelfuse Inc. | text.txt text.html lt po# 4513405497.pdf | littelfuse.com
Amanda_Mo@amat.com | cod.b2b.servicerequest@my344310.mail.crm.ondemand.com | RE: [ Ticket: 3018517 ] WF: WF: 25420987000020 & 25420672000020-- 0190-17499W * (1+1) =2EA--- pls create STO from 8665 & 8639 to 8602. thank you! | text.txt text.html image005.jpg image006.png image001.jpg image002.jpg image007.jpg | my344310.mail.crm.ondemand.com
Amanda_Mo@amat.com | hfamat.list@bondex.com.cn | RE: [ Ticket: 3018517 ] WF: WF: 25420987000020 & 25420672000020-- 0190-17499W * (1+1) =2EA--- pls create STO from 8665 & 8639 to 8602. thank you! | text.txt text.html image005.jpg image006.png image001.jpg image002.jpg image007.jpg | bondex.com.cn
tme@massgroup.com | tme@123box.net | Work Order Past Due Notification: WO# 199996 | text.txt | 123box.net

Desired output: the same table with an additional PDVal column, whose values for the four rows above would be False, False, False, True (True only for 123box.net, which appears in the lookup).

Kindly provide a solution to resolve this issue.
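A sketch of the missing step: look the domain up, then turn hit/miss into True/False (the lookup command leaves PDVal null when there is no match):

... | lookup publicDomain.csv publicDomain AS toDomain OUTPUT publicDomain AS PDVal
    | eval PDVal=if(isnotnull(PDVal), "True", "False")
    | table Sender Recipient Subject Attachment toDomain PDVal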
In the coldToFrozenExample.py script there is a --search-files-required argument switch that it looks for; if found, it will archive additional files instead of deleting them. I don't want to use this, but I would like to add my own switch to the script in order to make it more widely applicable. However, I'm not sure how to actually call the script with the arguments. Here is the line from indexes.conf that specifies the script:

coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/scripts/coldToFrozen.py"

When Splunk actually makes the call, it automatically inserts the bucket to archive after the script name (it has to do this, because the script looks for the bucket name as the first argument). So I don't know how I would specify a second argument. If anyone can point me in the right direction, I would very much appreciate it. Thanks so much.
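A sketch of one way to make the script tolerant of argument order, assuming you switch it to argparse so the bucket path and any custom flags can arrive in whatever order splunkd assembles them; --my-switch is a hypothetical flag name:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("bucket", help="bucket path that splunkd passes to the script")
parser.add_argument("--search-files-required", action="store_true")
parser.add_argument("--my-switch", action="store_true",
                    help="hypothetical custom flag added in coldToFrozenScript")
args = parser.parse_args()

if args.my_switch:
    pass  # custom archiving behavior goes here

With that in place, the extra flag would ride along in the indexes.conf command string:

coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/scripts/coldToFrozen.py" --my-switch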
Hi, I'm having trouble getting the latitudes and longitudes for a cluster map to work properly, given computer names with known coordinates. Unfortunately, the data in the index doesn't have the lat or lon in it. In this example, I am trying to figure out a way to eval against multiple standard naming conventions to assign each machine its latitude and longitude. Say I have 5 locations with corresponding naming conventions, where xxxx is a unique identifier within each of those 5 locations, and I know the latitude and longitude of each location. How would I go about evaluating every value in the "Computer Name" column for which location it belongs to, and then applying the corresponding lat & lon so it can be plotted on a cluster map? Here are the naming conventions and their corresponding coordinates:

Loc1xxxx: Lat 10.1010, Lon -10.1010
Loc2xxxx: Lat 20.2020, Lon -20.2020
Loc3xxxx: Lat 30.3030, Lon -30.3030
Loc4xxxx: Lat 40.4040, Lon -40.4040
Loc5xxxx: Lat 50.5050, Lon -50.5050

For this example, each location has 5 computers for simplicity's sake: Loc10001 - Loc10005, etc. Here is what I have so far, which resolves the lat and lon for a single location, but I am having trouble figuring out how to expand it to the other locations:

index="index_name"
| dedup "Computer Name"
| rename "Computer Name" as WKS
| eval lat=if(match(WKS, "Loc1"), "10.1010", "0")
| eval lon=if(match(WKS, "Loc1"), "-10.1010", "0")
| geostats latfield=lat longfield=lon count
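A sketch that extends that approach with case(), which returns the value of the first matching clause per event (coordinates as listed above; a lookup keyed on the location prefix would scale better than hardcoding if the site list grows):

index="index_name"
| dedup "Computer Name"
| rename "Computer Name" as WKS
| eval lat=case(match(WKS,"^Loc1"), 10.1010, match(WKS,"^Loc2"), 20.2020, match(WKS,"^Loc3"), 30.3030, match(WKS,"^Loc4"), 40.4040, match(WKS,"^Loc5"), 50.5050)
| eval lon=case(match(WKS,"^Loc1"), -10.1010, match(WKS,"^Loc2"), -20.2020, match(WKS,"^Loc3"), -30.3030, match(WKS,"^Loc4"), -40.4040, match(WKS,"^Loc5"), -50.5050)
| geostats latfield=lat longfield=lon count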
Hello everyone, I have just configured Splunk as a systemd service on my indexers. The start command works fine, but the stop command (systemctl stop Splunkd) returns some errors:

[root@pe-sec-idx-02 system]# systemctl status Splunkd
● Splunkd.service - Systemd service file for Splunk, generated by 'splunk enable boot-start'
   Loaded: loaded (/etc/systemd/system/Splunkd.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Wed 2022-01-12 15:18:32 CET; 9s ago
  Process: 1462 ExecStop=/opt/splunk/bin/splunk _internal_launch_under_systemd (code=exited, status=1/FAILURE)
  Process: 31484 ExecStop=/bin/sleep 10 (code=exited, status=0/SUCCESS)
  Process: 31225 ExecStop=/sbin/runuser -l splunk -c /opt/splunk/bin/splunk edit cluster-config -manual_detention on -auth admin:D1c3mbr3Sec (code=exited, status=0/SUCCESS)
  Process: 20750 ExecStartPost=/bin/bash -c chown -R 1001:1001 /sys/fs/cgroup/memory/system.slice/%n (code=exited, status=0/SUCCESS)
  Process: 20746 ExecStartPost=/bin/bash -c chown -R 1001:1001 /sys/fs/cgroup/cpu/system.slice/%n (code=exited, status=0/SUCCESS)
  Process: 20643 ExecStartPost=/sbin/runuser -l splunk -c /opt/splunk/bin/splunk edit cluster-config -manual_detention off -auth admin:D1c3mbr3Sec (code=exited, status=0/SUCCESS)
  Process: 19156 ExecStartPost=/bin/sleep 60 (code=exited, status=0/SUCCESS)
  Process: 19155 ExecStart=/opt/splunk/bin/splunk _internal_launch_under_systemd (code=exited, status=52)
 Main PID: 19155 (code=exited, status=52)

Jan 12 15:12:20 pe-sec-idx-02 splunk[19155]: All installed files intact.
Jan 12 15:12:20 pe-sec-idx-02 splunk[19155]: Done
Jan 12 15:12:21 pe-sec-idx-02 splunk[19155]: Checking replication_port port [9887]: 2022-01-12 15:12:21.354 +0100 splunkd started (build 7651b7244cf2)
Jan 12 15:13:17 pe-sec-idx-02 systemd[1]: Started Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Jan 12 15:18:03 pe-sec-idx-02 systemd[1]: Stopping Systemd service file for Splunk, generated by 'splunk enable boot-start'...
Jan 12 15:18:16 pe-sec-idx-02 systemd[1]: Splunkd.service: control process exited, code=exited status=1
Jan 12 15:18:16 pe-sec-idx-02 splunk[19155]: 2022-01-12 15:18:16.021 +0100 Interrupt signal received
Jan 12 15:18:34 pe-sec-idx-02 systemd[1]: Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Jan 12 15:18:34 pe-sec-idx-02 systemd[1]: Unit Splunkd.service entered failed state.
Jan 12 15:18:34 pe-sec-idx-02 systemd[1]: Splunkd.service failed.

Despite the output, the service stops successfully. As you can see, I added some instructions to the service unit file to put the indexer (which is part of a cluster) into manual detention before stopping it, and to turn manual detention off once Splunk is started. Again, the stop/start commands work fine, but I always get the above error messages when I stop the service. Am I doing something wrong? This is my service unit file:

#This unit file replaces the traditional start-up script for systemd
#configurations, and is used when enabling boot-start for Splunk on
#systemd-based Linux distributions.

[Unit]
Description=Systemd service file for Splunk, generated by 'splunk enable boot-start'
After=network.target

[Service]
Type=simple
Restart=always
ExecStart=/opt/splunk/bin/splunk _internal_launch_under_systemd
ExecStartPost=/bin/sleep 60
ExecStartPost=/sbin/runuser -l splunk -c '/opt/splunk/bin/splunk edit cluster-config -manual_detention off -auth admin:D1c3mbr3Sec'
ExecStop=/sbin/runuser -l splunk -c '/opt/splunk/bin/splunk edit cluster-config -manual_detention on -auth admin:D1c3mbr3Sec'
ExecStop=/bin/sleep 10
ExecStop=/opt/splunk/bin/splunk _internal_launch_under_systemd
LimitNOFILE=64000
LimitNPROC=16000
SuccessExitStatus=51 52
RestartPreventExitStatus=51
RestartForceExitStatus=52
User=splunk
Delegate=true
CPUShares=1024
CPUQuota=1400%
MemoryLimit=30G
PermissionsStartOnly=true
ExecStartPost=/bin/bash -c "chown -R 1001:1001 /sys/fs/cgroup/cpu/system.slice/%n"
ExecStartPost=/bin/bash -c "chown -R 1001:1001 /sys/fs/cgroup/memory/system.slice/%n"
KillMode=mixed
KillSignal=SIGINT
TimeoutStopSec=10min

[Install]
WantedBy=multi-user.target
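One detail that stands out, offered as a guess rather than a verified fix: the final ExecStop= line runs `splunk _internal_launch_under_systemd` again, which attempts a launch rather than a stop and exits 1, and that failing control process is exactly what systemd reports. Since KillMode=mixed with KillSignal=SIGINT already performs the actual shutdown, a trimmed stop section might look like this (password placeholder mine):

ExecStop=/sbin/runuser -l splunk -c '/opt/splunk/bin/splunk edit cluster-config -manual_detention on -auth admin:PASSWORD'
ExecStop=/bin/sleep 10
KillMode=mixed
KillSignal=SIGINT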
I am not getting data after configuring TCP port 80 in inputs.conf. My stanza is like this:

[tcp://80]
connection_host = dns
index = port
sourcetype = syslog

Can you give me any ideas on this? Thanks in advance.
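A quick sketch of the usual first fix: a Splunk instance running as a non-root user cannot bind ports below 1024, and port 80 is typically owned by a web server anyway, so an unprivileged port (5140 here is an arbitrary choice; point your sender at the same port) is worth testing:

[tcp://5140]
connection_host = dns
index = port
sourcetype = syslog

Also confirm that the "port" index actually exists on the indexer.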
Is there any way to protect/obfuscate dashboard XML/script source?
Hi all, I would like to ask if it is possible to include an IF condition in the search query.

If msg="Security Agent uninstallation*", perform the below:
| rex field=msg ":\s+\(*(?<result>[^)]+)"
| table _time msg result

If msg="Security Agent uninstallation command sent*", perform the below:
| rex field=msg "^[^;\n]*;\s+\w+:\s+(?P<endpoint>.+)"
| table _time msg suser endpoint
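A sketch of one way to fold both branches into a single search: run both extractions, then blank out whichever one does not apply to that event (the patterns are taken from the post above; verify against real data):

index=... msg="Security Agent uninstallation*"
| rex field=msg ":\s+\(*(?<result>[^)]+)"
| rex field=msg "^[^;\n]*;\s+\w+:\s+(?<endpoint>.+)"
| eval is_sent=if(like(msg, "Security Agent uninstallation command sent%"), 1, 0)
| eval result=if(is_sent==1, null(), result)
| eval endpoint=if(is_sent==1, endpoint, null())
| table _time msg suser result endpoint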
Hi, I want to extract the first word from each value. The index has a field called search_name which has these values:

Risk - 24 Hour Risk Threshold Exceeded - Rule
Endpoint - machine with possible malware - fffff
Network - Possible SQL injection - Rule

I want to use a regex to extract the first word of each value, so the output would be:

risk
endpoint
network

Thanks ^_^
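A sketch, assuming the first word always runs up to the first space or hyphen (the field name "category" is my own choice):

... | rex field=search_name "^(?<category>\w+)"
    | eval category=lower(category)
    | table category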
Who manages the Splunk captain, and how?